Oct 12 16:18:55 np0005481680 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 12 16:18:55 np0005481680 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 12 16:18:55 np0005481680 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 12 16:18:55 np0005481680 kernel: BIOS-provided physical RAM map:
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 12 16:18:55 np0005481680 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 12 16:18:55 np0005481680 kernel: NX (Execute Disable) protection: active
Oct 12 16:18:55 np0005481680 kernel: APIC: Static calls initialized
Oct 12 16:18:55 np0005481680 kernel: SMBIOS 2.8 present.
Oct 12 16:18:55 np0005481680 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 12 16:18:55 np0005481680 kernel: Hypervisor detected: KVM
Oct 12 16:18:55 np0005481680 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 12 16:18:55 np0005481680 kernel: kvm-clock: using sched offset of 4428334771 cycles
Oct 12 16:18:55 np0005481680 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 12 16:18:55 np0005481680 kernel: tsc: Detected 2799.998 MHz processor
Oct 12 16:18:55 np0005481680 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 12 16:18:55 np0005481680 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 12 16:18:55 np0005481680 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 12 16:18:55 np0005481680 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 12 16:18:55 np0005481680 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 12 16:18:55 np0005481680 kernel: Using GB pages for direct mapping
Oct 12 16:18:55 np0005481680 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 12 16:18:55 np0005481680 kernel: ACPI: Early table checksum verification disabled
Oct 12 16:18:55 np0005481680 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 12 16:18:55 np0005481680 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 12 16:18:55 np0005481680 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 12 16:18:55 np0005481680 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 12 16:18:55 np0005481680 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 12 16:18:55 np0005481680 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 12 16:18:55 np0005481680 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 12 16:18:55 np0005481680 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 12 16:18:55 np0005481680 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 12 16:18:55 np0005481680 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 12 16:18:55 np0005481680 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 12 16:18:55 np0005481680 kernel: No NUMA configuration found
Oct 12 16:18:55 np0005481680 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 12 16:18:55 np0005481680 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 12 16:18:55 np0005481680 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 12 16:18:55 np0005481680 kernel: Zone ranges:
Oct 12 16:18:55 np0005481680 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 12 16:18:55 np0005481680 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 12 16:18:55 np0005481680 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 12 16:18:55 np0005481680 kernel:  Device   empty
Oct 12 16:18:55 np0005481680 kernel: Movable zone start for each node
Oct 12 16:18:55 np0005481680 kernel: Early memory node ranges
Oct 12 16:18:55 np0005481680 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 12 16:18:55 np0005481680 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 12 16:18:55 np0005481680 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 12 16:18:55 np0005481680 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 12 16:18:55 np0005481680 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 12 16:18:55 np0005481680 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 12 16:18:55 np0005481680 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 12 16:18:55 np0005481680 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 12 16:18:55 np0005481680 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 12 16:18:55 np0005481680 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 12 16:18:55 np0005481680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 12 16:18:55 np0005481680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 12 16:18:55 np0005481680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 12 16:18:55 np0005481680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 12 16:18:55 np0005481680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 12 16:18:55 np0005481680 kernel: TSC deadline timer available
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Max. logical packages:   8
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Max. logical dies:       8
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Max. dies per package:   1
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Max. threads per core:   1
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Num. cores per package:     1
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Num. threads per package:   1
Oct 12 16:18:55 np0005481680 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 12 16:18:55 np0005481680 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 12 16:18:55 np0005481680 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 12 16:18:55 np0005481680 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 12 16:18:55 np0005481680 kernel: Booting paravirtualized kernel on KVM
Oct 12 16:18:55 np0005481680 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 12 16:18:55 np0005481680 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 12 16:18:55 np0005481680 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 12 16:18:55 np0005481680 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 12 16:18:55 np0005481680 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 12 16:18:55 np0005481680 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 12 16:18:55 np0005481680 kernel: random: crng init done
Oct 12 16:18:55 np0005481680 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: Fallback order for Node 0: 0 
Oct 12 16:18:55 np0005481680 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 12 16:18:55 np0005481680 kernel: Policy zone: Normal
Oct 12 16:18:55 np0005481680 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 12 16:18:55 np0005481680 kernel: software IO TLB: area num 8.
Oct 12 16:18:55 np0005481680 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 12 16:18:55 np0005481680 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 12 16:18:55 np0005481680 kernel: ftrace: allocated 193 pages with 3 groups
Oct 12 16:18:55 np0005481680 kernel: Dynamic Preempt: voluntary
Oct 12 16:18:55 np0005481680 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 12 16:18:55 np0005481680 kernel: rcu: 	RCU event tracing is enabled.
Oct 12 16:18:55 np0005481680 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 12 16:18:55 np0005481680 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 12 16:18:55 np0005481680 kernel: 	Rude variant of Tasks RCU enabled.
Oct 12 16:18:55 np0005481680 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 12 16:18:55 np0005481680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 12 16:18:55 np0005481680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 12 16:18:55 np0005481680 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 12 16:18:55 np0005481680 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 12 16:18:55 np0005481680 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 12 16:18:55 np0005481680 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 12 16:18:55 np0005481680 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 12 16:18:55 np0005481680 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 12 16:18:55 np0005481680 kernel: Console: colour VGA+ 80x25
Oct 12 16:18:55 np0005481680 kernel: printk: console [ttyS0] enabled
Oct 12 16:18:55 np0005481680 kernel: ACPI: Core revision 20230331
Oct 12 16:18:55 np0005481680 kernel: APIC: Switch to symmetric I/O mode setup
Oct 12 16:18:55 np0005481680 kernel: x2apic enabled
Oct 12 16:18:55 np0005481680 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 12 16:18:55 np0005481680 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 12 16:18:55 np0005481680 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 12 16:18:55 np0005481680 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 12 16:18:55 np0005481680 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 12 16:18:55 np0005481680 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 12 16:18:55 np0005481680 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 12 16:18:55 np0005481680 kernel: Spectre V2 : Mitigation: Retpolines
Oct 12 16:18:55 np0005481680 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 12 16:18:55 np0005481680 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 12 16:18:55 np0005481680 kernel: RETBleed: Mitigation: untrained return thunk
Oct 12 16:18:55 np0005481680 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 12 16:18:55 np0005481680 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 12 16:18:55 np0005481680 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 12 16:18:55 np0005481680 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 12 16:18:55 np0005481680 kernel: x86/bugs: return thunk changed
Oct 12 16:18:55 np0005481680 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 12 16:18:55 np0005481680 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 12 16:18:55 np0005481680 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 12 16:18:55 np0005481680 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 12 16:18:55 np0005481680 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 12 16:18:55 np0005481680 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 12 16:18:55 np0005481680 kernel: Freeing SMP alternatives memory: 40K
Oct 12 16:18:55 np0005481680 kernel: pid_max: default: 32768 minimum: 301
Oct 12 16:18:55 np0005481680 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 12 16:18:55 np0005481680 kernel: landlock: Up and running.
Oct 12 16:18:55 np0005481680 kernel: Yama: becoming mindful.
Oct 12 16:18:55 np0005481680 kernel: SELinux:  Initializing.
Oct 12 16:18:55 np0005481680 kernel: LSM support for eBPF active
Oct 12 16:18:55 np0005481680 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 12 16:18:55 np0005481680 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 12 16:18:55 np0005481680 kernel: ... version:                0
Oct 12 16:18:55 np0005481680 kernel: ... bit width:              48
Oct 12 16:18:55 np0005481680 kernel: ... generic registers:      6
Oct 12 16:18:55 np0005481680 kernel: ... value mask:             0000ffffffffffff
Oct 12 16:18:55 np0005481680 kernel: ... max period:             00007fffffffffff
Oct 12 16:18:55 np0005481680 kernel: ... fixed-purpose events:   0
Oct 12 16:18:55 np0005481680 kernel: ... event mask:             000000000000003f
Oct 12 16:18:55 np0005481680 kernel: signal: max sigframe size: 1776
Oct 12 16:18:55 np0005481680 kernel: rcu: Hierarchical SRCU implementation.
Oct 12 16:18:55 np0005481680 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 12 16:18:55 np0005481680 kernel: smp: Bringing up secondary CPUs ...
Oct 12 16:18:55 np0005481680 kernel: smpboot: x86: Booting SMP configuration:
Oct 12 16:18:55 np0005481680 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 12 16:18:55 np0005481680 kernel: smp: Brought up 1 node, 8 CPUs
Oct 12 16:18:55 np0005481680 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 12 16:18:55 np0005481680 kernel: node 0 deferred pages initialised in 13ms
Oct 12 16:18:55 np0005481680 kernel: Memory: 7766044K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 12 16:18:55 np0005481680 kernel: devtmpfs: initialized
Oct 12 16:18:55 np0005481680 kernel: x86/mm: Memory block size: 128MB
Oct 12 16:18:55 np0005481680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 12 16:18:55 np0005481680 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: pinctrl core: initialized pinctrl subsystem
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 12 16:18:55 np0005481680 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 12 16:18:55 np0005481680 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 12 16:18:55 np0005481680 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 12 16:18:55 np0005481680 kernel: audit: initializing netlink subsys (disabled)
Oct 12 16:18:55 np0005481680 kernel: audit: type=2000 audit(1760300333.596:1): state=initialized audit_enabled=0 res=1
Oct 12 16:18:55 np0005481680 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 12 16:18:55 np0005481680 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 12 16:18:55 np0005481680 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 12 16:18:55 np0005481680 kernel: cpuidle: using governor menu
Oct 12 16:18:55 np0005481680 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 12 16:18:55 np0005481680 kernel: PCI: Using configuration type 1 for base access
Oct 12 16:18:55 np0005481680 kernel: PCI: Using configuration type 1 for extended access
Oct 12 16:18:55 np0005481680 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 12 16:18:55 np0005481680 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 12 16:18:55 np0005481680 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 12 16:18:55 np0005481680 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 12 16:18:55 np0005481680 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 12 16:18:55 np0005481680 kernel: Demotion targets for Node 0: null
Oct 12 16:18:55 np0005481680 kernel: cryptd: max_cpu_qlen set to 1000
Oct 12 16:18:55 np0005481680 kernel: ACPI: Added _OSI(Module Device)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Added _OSI(Processor Device)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 12 16:18:55 np0005481680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 12 16:18:55 np0005481680 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 12 16:18:55 np0005481680 kernel: ACPI: Interpreter enabled
Oct 12 16:18:55 np0005481680 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 12 16:18:55 np0005481680 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 12 16:18:55 np0005481680 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 12 16:18:55 np0005481680 kernel: PCI: Using E820 reservations for host bridge windows
Oct 12 16:18:55 np0005481680 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 12 16:18:55 np0005481680 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [3] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [4] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [5] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [6] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [7] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [8] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [9] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [10] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [11] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [12] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [13] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [14] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [15] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [16] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [17] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [18] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [19] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [20] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [21] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [22] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [23] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [24] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [25] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [26] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [27] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [28] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [29] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [30] registered
Oct 12 16:18:55 np0005481680 kernel: acpiphp: Slot [31] registered
Oct 12 16:18:55 np0005481680 kernel: PCI host bridge to bus 0000:00
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 12 16:18:55 np0005481680 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 12 16:18:55 np0005481680 kernel: iommu: Default domain type: Translated
Oct 12 16:18:55 np0005481680 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 12 16:18:55 np0005481680 kernel: SCSI subsystem initialized
Oct 12 16:18:55 np0005481680 kernel: ACPI: bus type USB registered
Oct 12 16:18:55 np0005481680 kernel: usbcore: registered new interface driver usbfs
Oct 12 16:18:55 np0005481680 kernel: usbcore: registered new interface driver hub
Oct 12 16:18:55 np0005481680 kernel: usbcore: registered new device driver usb
Oct 12 16:18:55 np0005481680 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 12 16:18:55 np0005481680 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 12 16:18:55 np0005481680 kernel: PTP clock support registered
Oct 12 16:18:55 np0005481680 kernel: EDAC MC: Ver: 3.0.0
Oct 12 16:18:55 np0005481680 kernel: NetLabel: Initializing
Oct 12 16:18:55 np0005481680 kernel: NetLabel:  domain hash size = 128
Oct 12 16:18:55 np0005481680 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 12 16:18:55 np0005481680 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 12 16:18:55 np0005481680 kernel: PCI: Using ACPI for IRQ routing
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 12 16:18:55 np0005481680 kernel: vgaarb: loaded
Oct 12 16:18:55 np0005481680 kernel: clocksource: Switched to clocksource kvm-clock
Oct 12 16:18:55 np0005481680 kernel: VFS: Disk quotas dquot_6.6.0
Oct 12 16:18:55 np0005481680 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 12 16:18:55 np0005481680 kernel: pnp: PnP ACPI init
Oct 12 16:18:55 np0005481680 kernel: pnp: PnP ACPI: found 5 devices
Oct 12 16:18:55 np0005481680 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_INET protocol family
Oct 12 16:18:55 np0005481680 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 12 16:18:55 np0005481680 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_XDP protocol family
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 12 16:18:55 np0005481680 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 12 16:18:55 np0005481680 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 12 16:18:55 np0005481680 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75745 usecs
Oct 12 16:18:55 np0005481680 kernel: PCI: CLS 0 bytes, default 64
Oct 12 16:18:55 np0005481680 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 12 16:18:55 np0005481680 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 12 16:18:55 np0005481680 kernel: ACPI: bus type thunderbolt registered
Oct 12 16:18:55 np0005481680 kernel: Trying to unpack rootfs image as initramfs...
Oct 12 16:18:55 np0005481680 kernel: Initialise system trusted keyrings
Oct 12 16:18:55 np0005481680 kernel: Key type blacklist registered
Oct 12 16:18:55 np0005481680 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 12 16:18:55 np0005481680 kernel: zbud: loaded
Oct 12 16:18:55 np0005481680 kernel: integrity: Platform Keyring initialized
Oct 12 16:18:55 np0005481680 kernel: integrity: Machine keyring initialized
Oct 12 16:18:55 np0005481680 kernel: Freeing initrd memory: 85808K
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_ALG protocol family
Oct 12 16:18:55 np0005481680 kernel: xor: automatically using best checksumming function   avx       
Oct 12 16:18:55 np0005481680 kernel: Key type asymmetric registered
Oct 12 16:18:55 np0005481680 kernel: Asymmetric key parser 'x509' registered
Oct 12 16:18:55 np0005481680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 12 16:18:55 np0005481680 kernel: io scheduler mq-deadline registered
Oct 12 16:18:55 np0005481680 kernel: io scheduler kyber registered
Oct 12 16:18:55 np0005481680 kernel: io scheduler bfq registered
Oct 12 16:18:55 np0005481680 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 12 16:18:55 np0005481680 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 12 16:18:55 np0005481680 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 12 16:18:55 np0005481680 kernel: ACPI: button: Power Button [PWRF]
Oct 12 16:18:55 np0005481680 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 12 16:18:55 np0005481680 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 12 16:18:55 np0005481680 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 12 16:18:55 np0005481680 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 12 16:18:55 np0005481680 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 12 16:18:55 np0005481680 kernel: Non-volatile memory driver v1.3
Oct 12 16:18:55 np0005481680 kernel: rdac: device handler registered
Oct 12 16:18:55 np0005481680 kernel: hp_sw: device handler registered
Oct 12 16:18:55 np0005481680 kernel: emc: device handler registered
Oct 12 16:18:55 np0005481680 kernel: alua: device handler registered
Oct 12 16:18:55 np0005481680 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 12 16:18:55 np0005481680 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 12 16:18:55 np0005481680 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 12 16:18:55 np0005481680 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 12 16:18:55 np0005481680 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 12 16:18:55 np0005481680 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 12 16:18:55 np0005481680 kernel: usb usb1: Product: UHCI Host Controller
Oct 12 16:18:55 np0005481680 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 12 16:18:55 np0005481680 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 12 16:18:55 np0005481680 kernel: hub 1-0:1.0: USB hub found
Oct 12 16:18:55 np0005481680 kernel: hub 1-0:1.0: 2 ports detected
Oct 12 16:18:55 np0005481680 kernel: usbcore: registered new interface driver usbserial_generic
Oct 12 16:18:55 np0005481680 kernel: usbserial: USB Serial support registered for generic
Oct 12 16:18:55 np0005481680 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 12 16:18:55 np0005481680 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 12 16:18:55 np0005481680 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 12 16:18:55 np0005481680 kernel: mousedev: PS/2 mouse device common for all mice
Oct 12 16:18:55 np0005481680 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 12 16:18:55 np0005481680 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 12 16:18:55 np0005481680 kernel: rtc_cmos 00:04: registered as rtc0
Oct 12 16:18:55 np0005481680 kernel: rtc_cmos 00:04: setting system clock to 2025-10-12T20:18:54 UTC (1760300334)
Oct 12 16:18:55 np0005481680 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 12 16:18:55 np0005481680 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 12 16:18:55 np0005481680 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 12 16:18:55 np0005481680 kernel: usbcore: registered new interface driver usbhid
Oct 12 16:18:55 np0005481680 kernel: usbhid: USB HID core driver
Oct 12 16:18:55 np0005481680 kernel: drop_monitor: Initializing network drop monitor service
Oct 12 16:18:55 np0005481680 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 12 16:18:55 np0005481680 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 12 16:18:55 np0005481680 kernel: Initializing XFRM netlink socket
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_INET6 protocol family
Oct 12 16:18:55 np0005481680 kernel: Segment Routing with IPv6
Oct 12 16:18:55 np0005481680 kernel: NET: Registered PF_PACKET protocol family
Oct 12 16:18:55 np0005481680 kernel: mpls_gso: MPLS GSO support
Oct 12 16:18:55 np0005481680 kernel: IPI shorthand broadcast: enabled
Oct 12 16:18:55 np0005481680 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 12 16:18:55 np0005481680 kernel: AES CTR mode by8 optimization enabled
Oct 12 16:18:55 np0005481680 kernel: sched_clock: Marking stable (1225009486, 144794733)->(1483686739, -113882520)
Oct 12 16:18:55 np0005481680 kernel: registered taskstats version 1
Oct 12 16:18:55 np0005481680 kernel: Loading compiled-in X.509 certificates
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 12 16:18:55 np0005481680 kernel: Demotion targets for Node 0: null
Oct 12 16:18:55 np0005481680 kernel: page_owner is disabled
Oct 12 16:18:55 np0005481680 kernel: Key type .fscrypt registered
Oct 12 16:18:55 np0005481680 kernel: Key type fscrypt-provisioning registered
Oct 12 16:18:55 np0005481680 kernel: Key type big_key registered
Oct 12 16:18:55 np0005481680 kernel: Key type encrypted registered
Oct 12 16:18:55 np0005481680 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 12 16:18:55 np0005481680 kernel: Loading compiled-in module X.509 certificates
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 12 16:18:55 np0005481680 kernel: ima: Allocated hash algorithm: sha256
Oct 12 16:18:55 np0005481680 kernel: ima: No architecture policies found
Oct 12 16:18:55 np0005481680 kernel: evm: Initialising EVM extended attributes:
Oct 12 16:18:55 np0005481680 kernel: evm: security.selinux
Oct 12 16:18:55 np0005481680 kernel: evm: security.SMACK64 (disabled)
Oct 12 16:18:55 np0005481680 kernel: evm: security.SMACK64EXEC (disabled)
Oct 12 16:18:55 np0005481680 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 12 16:18:55 np0005481680 kernel: evm: security.SMACK64MMAP (disabled)
Oct 12 16:18:55 np0005481680 kernel: evm: security.apparmor (disabled)
Oct 12 16:18:55 np0005481680 kernel: evm: security.ima
Oct 12 16:18:55 np0005481680 kernel: evm: security.capability
Oct 12 16:18:55 np0005481680 kernel: evm: HMAC attrs: 0x1
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 12 16:18:55 np0005481680 kernel: Running certificate verification RSA selftest
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 12 16:18:55 np0005481680 kernel: Running certificate verification ECDSA selftest
Oct 12 16:18:55 np0005481680 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 12 16:18:55 np0005481680 kernel: clk: Disabling unused clocks
Oct 12 16:18:55 np0005481680 kernel: Freeing unused decrypted memory: 2028K
Oct 12 16:18:55 np0005481680 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 12 16:18:55 np0005481680 kernel: Write protecting the kernel read-only data: 30720k
Oct 12 16:18:55 np0005481680 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: Manufacturer: QEMU
Oct 12 16:18:55 np0005481680 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 12 16:18:55 np0005481680 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 12 16:18:55 np0005481680 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 12 16:18:55 np0005481680 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 12 16:18:55 np0005481680 kernel: Run /init as init process
Oct 12 16:18:55 np0005481680 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 12 16:18:55 np0005481680 systemd: Detected virtualization kvm.
Oct 12 16:18:55 np0005481680 systemd: Detected architecture x86-64.
Oct 12 16:18:55 np0005481680 systemd: Running in initrd.
Oct 12 16:18:55 np0005481680 systemd: No hostname configured, using default hostname.
Oct 12 16:18:55 np0005481680 systemd: Hostname set to <localhost>.
Oct 12 16:18:55 np0005481680 systemd: Initializing machine ID from VM UUID.
Oct 12 16:18:55 np0005481680 systemd: Queued start job for default target Initrd Default Target.
Oct 12 16:18:55 np0005481680 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 12 16:18:55 np0005481680 systemd: Reached target Local Encrypted Volumes.
Oct 12 16:18:55 np0005481680 systemd: Reached target Initrd /usr File System.
Oct 12 16:18:55 np0005481680 systemd: Reached target Local File Systems.
Oct 12 16:18:55 np0005481680 systemd: Reached target Path Units.
Oct 12 16:18:55 np0005481680 systemd: Reached target Slice Units.
Oct 12 16:18:55 np0005481680 systemd: Reached target Swaps.
Oct 12 16:18:55 np0005481680 systemd: Reached target Timer Units.
Oct 12 16:18:55 np0005481680 systemd: Listening on D-Bus System Message Bus Socket.
Oct 12 16:18:55 np0005481680 systemd: Listening on Journal Socket (/dev/log).
Oct 12 16:18:55 np0005481680 systemd: Listening on Journal Socket.
Oct 12 16:18:55 np0005481680 systemd: Listening on udev Control Socket.
Oct 12 16:18:55 np0005481680 systemd: Listening on udev Kernel Socket.
Oct 12 16:18:55 np0005481680 systemd: Reached target Socket Units.
Oct 12 16:18:55 np0005481680 systemd: Starting Create List of Static Device Nodes...
Oct 12 16:18:55 np0005481680 systemd: Starting Journal Service...
Oct 12 16:18:55 np0005481680 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 12 16:18:55 np0005481680 systemd: Starting Apply Kernel Variables...
Oct 12 16:18:55 np0005481680 systemd: Starting Create System Users...
Oct 12 16:18:55 np0005481680 systemd: Starting Setup Virtual Console...
Oct 12 16:18:55 np0005481680 systemd: Finished Create List of Static Device Nodes.
Oct 12 16:18:55 np0005481680 systemd: Finished Apply Kernel Variables.
Oct 12 16:18:55 np0005481680 systemd: Finished Create System Users.
Oct 12 16:18:55 np0005481680 systemd-journald[306]: Journal started
Oct 12 16:18:55 np0005481680 systemd-journald[306]: Runtime Journal (/run/log/journal/7d715b3e003b4a6c84d2be911b9b9ce7) is 8.0M, max 153.6M, 145.6M free.
Oct 12 16:18:55 np0005481680 systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct 12 16:18:55 np0005481680 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct 12 16:18:55 np0005481680 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 12 16:18:55 np0005481680 systemd: Started Journal Service.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 12 16:18:55 np0005481680 systemd[1]: Starting Create Volatile Files and Directories...
Oct 12 16:18:55 np0005481680 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 12 16:18:55 np0005481680 systemd[1]: Finished Setup Virtual Console.
Oct 12 16:18:55 np0005481680 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting dracut cmdline hook...
Oct 12 16:18:55 np0005481680 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Oct 12 16:18:55 np0005481680 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 12 16:18:55 np0005481680 systemd[1]: Finished Create Volatile Files and Directories.
Oct 12 16:18:55 np0005481680 systemd[1]: Finished dracut cmdline hook.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting dracut pre-udev hook...
Oct 12 16:18:55 np0005481680 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 12 16:18:55 np0005481680 kernel: device-mapper: uevent: version 1.0.3
Oct 12 16:18:55 np0005481680 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 12 16:18:55 np0005481680 kernel: RPC: Registered named UNIX socket transport module.
Oct 12 16:18:55 np0005481680 kernel: RPC: Registered udp transport module.
Oct 12 16:18:55 np0005481680 kernel: RPC: Registered tcp transport module.
Oct 12 16:18:55 np0005481680 kernel: RPC: Registered tcp-with-tls transport module.
Oct 12 16:18:55 np0005481680 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 12 16:18:55 np0005481680 rpc.statd[443]: Version 2.5.4 starting
Oct 12 16:18:55 np0005481680 rpc.statd[443]: Initializing NSM state
Oct 12 16:18:55 np0005481680 rpc.idmapd[448]: Setting log level to 0
Oct 12 16:18:55 np0005481680 systemd[1]: Finished dracut pre-udev hook.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 12 16:18:55 np0005481680 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 12 16:18:55 np0005481680 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting dracut pre-trigger hook...
Oct 12 16:18:55 np0005481680 systemd[1]: Finished dracut pre-trigger hook.
Oct 12 16:18:55 np0005481680 systemd[1]: Starting Coldplug All udev Devices...
Oct 12 16:18:56 np0005481680 systemd[1]: Created slice Slice /system/modprobe.
Oct 12 16:18:56 np0005481680 systemd[1]: Starting Load Kernel Module configfs...
Oct 12 16:18:56 np0005481680 systemd[1]: Finished Coldplug All udev Devices.
Oct 12 16:18:56 np0005481680 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 12 16:18:56 np0005481680 systemd[1]: Finished Load Kernel Module configfs.
Oct 12 16:18:56 np0005481680 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Network.
Oct 12 16:18:56 np0005481680 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 12 16:18:56 np0005481680 systemd[1]: Starting dracut initqueue hook...
Oct 12 16:18:56 np0005481680 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 12 16:18:56 np0005481680 kernel: scsi host0: ata_piix
Oct 12 16:18:56 np0005481680 kernel: scsi host1: ata_piix
Oct 12 16:18:56 np0005481680 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 12 16:18:56 np0005481680 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 12 16:18:56 np0005481680 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 12 16:18:56 np0005481680 kernel: vda: vda1
Oct 12 16:18:56 np0005481680 systemd[1]: Mounting Kernel Configuration File System...
Oct 12 16:18:56 np0005481680 systemd[1]: Mounted Kernel Configuration File System.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target System Initialization.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Basic System.
Oct 12 16:18:56 np0005481680 kernel: ata1: found unknown device (class 0)
Oct 12 16:18:56 np0005481680 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 12 16:18:56 np0005481680 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 12 16:18:56 np0005481680 systemd-udevd[473]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:18:56 np0005481680 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 12 16:18:56 np0005481680 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 12 16:18:56 np0005481680 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 12 16:18:56 np0005481680 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Initrd Root Device.
Oct 12 16:18:56 np0005481680 systemd[1]: Finished dracut initqueue hook.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 12 16:18:56 np0005481680 systemd[1]: Reached target Remote File Systems.
Oct 12 16:18:56 np0005481680 systemd[1]: Starting dracut pre-mount hook...
Oct 12 16:18:56 np0005481680 systemd[1]: Finished dracut pre-mount hook.
Oct 12 16:18:56 np0005481680 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 12 16:18:56 np0005481680 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Oct 12 16:18:56 np0005481680 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 12 16:18:56 np0005481680 systemd[1]: Mounting /sysroot...
Oct 12 16:18:57 np0005481680 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 12 16:18:57 np0005481680 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 12 16:18:57 np0005481680 kernel: XFS (vda1): Ending clean mount
Oct 12 16:18:57 np0005481680 systemd[1]: Mounted /sysroot.
Oct 12 16:18:57 np0005481680 systemd[1]: Reached target Initrd Root File System.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 12 16:18:57 np0005481680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 12 16:18:57 np0005481680 systemd[1]: Reached target Initrd File Systems.
Oct 12 16:18:57 np0005481680 systemd[1]: Reached target Initrd Default Target.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting dracut mount hook...
Oct 12 16:18:57 np0005481680 systemd[1]: Finished dracut mount hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 12 16:18:57 np0005481680 rpc.idmapd[448]: exiting on signal 15
Oct 12 16:18:57 np0005481680 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Network.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Timer Units.
Oct 12 16:18:57 np0005481680 systemd[1]: dbus.socket: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Initrd Default Target.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Basic System.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Initrd Root Device.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Initrd /usr File System.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Path Units.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Remote File Systems.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Slice Units.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Socket Units.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target System Initialization.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Local File Systems.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Swaps.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut mount hook.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut pre-mount hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut initqueue hook.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Apply Kernel Variables.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Coldplug All udev Devices.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut pre-trigger hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Setup Virtual Console.
Oct 12 16:18:57 np0005481680 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-udevd.service: Consumed 1.058s CPU time.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Closed udev Control Socket.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Closed udev Kernel Socket.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut pre-udev hook.
Oct 12 16:18:57 np0005481680 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped dracut cmdline hook.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting Cleanup udev Database...
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 12 16:18:57 np0005481680 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 12 16:18:57 np0005481680 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Stopped Create System Users.
Oct 12 16:18:57 np0005481680 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 12 16:18:57 np0005481680 systemd[1]: Finished Cleanup udev Database.
Oct 12 16:18:57 np0005481680 systemd[1]: Reached target Switch Root.
Oct 12 16:18:57 np0005481680 systemd[1]: Starting Switch Root...
Oct 12 16:18:57 np0005481680 systemd[1]: Switching root.
Oct 12 16:18:57 np0005481680 systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Oct 12 16:18:57 np0005481680 systemd-journald[306]: Journal stopped
Oct 12 16:18:58 np0005481680 kernel: audit: type=1404 audit(1760300337.724:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:18:58 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:18:58 np0005481680 kernel: audit: type=1403 audit(1760300337.862:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 12 16:18:58 np0005481680 systemd: Successfully loaded SELinux policy in 142.722ms.
Oct 12 16:18:58 np0005481680 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.334ms.
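
The audit and SELinux lines above record the switch from permissive to enforcing (enforcing=1 old_enforcing=0) followed by the policy load and relabel. As an aside, the resulting mode can be read back from selinuxfs at runtime; a minimal Python sketch, assuming the standard mount point at /sys/fs/selinux:

    from pathlib import Path

    def selinux_enforcing():
        """Return True if enforcing, False if permissive, None if SELinux is absent."""
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return None
        return enforce.read_text().strip() == "1"
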
Oct 12 16:18:58 np0005481680 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 12 16:18:58 np0005481680 systemd: Detected virtualization kvm.
Oct 12 16:18:58 np0005481680 systemd: Detected architecture x86-64.
Oct 12 16:18:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:18:58 np0005481680 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd: Stopped Switch Root.
Oct 12 16:18:58 np0005481680 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 12 16:18:58 np0005481680 systemd: Created slice Slice /system/getty.
Oct 12 16:18:58 np0005481680 systemd: Created slice Slice /system/serial-getty.
Oct 12 16:18:58 np0005481680 systemd: Created slice Slice /system/sshd-keygen.
Oct 12 16:18:58 np0005481680 systemd: Created slice User and Session Slice.
Oct 12 16:18:58 np0005481680 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 12 16:18:58 np0005481680 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 12 16:18:58 np0005481680 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 12 16:18:58 np0005481680 systemd: Reached target Local Encrypted Volumes.
Oct 12 16:18:58 np0005481680 systemd: Stopped target Switch Root.
Oct 12 16:18:58 np0005481680 systemd: Stopped target Initrd File Systems.
Oct 12 16:18:58 np0005481680 systemd: Stopped target Initrd Root File System.
Oct 12 16:18:58 np0005481680 systemd: Reached target Local Integrity Protected Volumes.
Oct 12 16:18:58 np0005481680 systemd: Reached target Path Units.
Oct 12 16:18:58 np0005481680 systemd: Reached target rpc_pipefs.target.
Oct 12 16:18:58 np0005481680 systemd: Reached target Slice Units.
Oct 12 16:18:58 np0005481680 systemd: Reached target Swaps.
Oct 12 16:18:58 np0005481680 systemd: Reached target Local Verity Protected Volumes.
Oct 12 16:18:58 np0005481680 systemd: Listening on RPCbind Server Activation Socket.
Oct 12 16:18:58 np0005481680 systemd: Reached target RPC Port Mapper.
Oct 12 16:18:58 np0005481680 systemd: Listening on Process Core Dump Socket.
Oct 12 16:18:58 np0005481680 systemd: Listening on initctl Compatibility Named Pipe.
Oct 12 16:18:58 np0005481680 systemd: Listening on udev Control Socket.
Oct 12 16:18:58 np0005481680 systemd: Listening on udev Kernel Socket.
Oct 12 16:18:58 np0005481680 systemd: Mounting Huge Pages File System...
Oct 12 16:18:58 np0005481680 systemd: Mounting POSIX Message Queue File System...
Oct 12 16:18:58 np0005481680 systemd: Mounting Kernel Debug File System...
Oct 12 16:18:58 np0005481680 systemd: Mounting Kernel Trace File System...
Oct 12 16:18:58 np0005481680 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 12 16:18:58 np0005481680 systemd: Starting Create List of Static Device Nodes...
Oct 12 16:18:58 np0005481680 systemd: Starting Load Kernel Module configfs...
Oct 12 16:18:58 np0005481680 systemd: Starting Load Kernel Module drm...
Oct 12 16:18:58 np0005481680 systemd: Starting Load Kernel Module efi_pstore...
Oct 12 16:18:58 np0005481680 systemd: Starting Load Kernel Module fuse...
Oct 12 16:18:58 np0005481680 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 12 16:18:58 np0005481680 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd: Stopped File System Check on Root Device.
Oct 12 16:18:58 np0005481680 systemd: Stopped Journal Service.
Oct 12 16:18:58 np0005481680 systemd: Starting Journal Service...
Oct 12 16:18:58 np0005481680 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 12 16:18:58 np0005481680 systemd: Starting Generate network units from Kernel command line...
Oct 12 16:18:58 np0005481680 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 12 16:18:58 np0005481680 systemd: Starting Remount Root and Kernel File Systems...
Oct 12 16:18:58 np0005481680 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 12 16:18:58 np0005481680 systemd: Starting Apply Kernel Variables...
Oct 12 16:18:58 np0005481680 kernel: fuse: init (API version 7.37)
Oct 12 16:18:58 np0005481680 systemd: Starting Coldplug All udev Devices...
Oct 12 16:18:58 np0005481680 systemd: Mounted Huge Pages File System.
Oct 12 16:18:58 np0005481680 systemd: Mounted POSIX Message Queue File System.
Oct 12 16:18:58 np0005481680 systemd: Mounted Kernel Debug File System.
Oct 12 16:18:58 np0005481680 systemd: Mounted Kernel Trace File System.
Oct 12 16:18:58 np0005481680 systemd-journald[678]: Journal started
Oct 12 16:18:58 np0005481680 systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 12 16:18:58 np0005481680 systemd[1]: Queued start job for default target Multi-User System.
Oct 12 16:18:58 np0005481680 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd: Started Journal Service.
Oct 12 16:18:58 np0005481680 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Create List of Static Device Nodes.
Oct 12 16:18:58 np0005481680 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Load Kernel Module configfs.
Oct 12 16:18:58 np0005481680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 12 16:18:58 np0005481680 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Load Kernel Module fuse.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Generate network units from Kernel command line.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Apply Kernel Variables.
Oct 12 16:18:58 np0005481680 kernel: ACPI: bus type drm_connector registered
Oct 12 16:18:58 np0005481680 systemd[1]: Mounting FUSE Control File System...
Oct 12 16:18:58 np0005481680 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Rebuild Hardware Database...
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 12 16:18:58 np0005481680 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Load/Save OS Random Seed...
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Create System Users...
Oct 12 16:18:58 np0005481680 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Load Kernel Module drm.
Oct 12 16:18:58 np0005481680 systemd[1]: Mounted FUSE Control File System.
Oct 12 16:18:58 np0005481680 systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 12 16:18:58 np0005481680 systemd-journald[678]: Received client request to flush runtime journal.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Load/Save OS Random Seed.
Oct 12 16:18:58 np0005481680 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Create System Users.
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Coldplug All udev Devices.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 12 16:18:58 np0005481680 systemd[1]: Reached target Preparation for Local File Systems.
Oct 12 16:18:58 np0005481680 systemd[1]: Reached target Local File Systems.
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 12 16:18:58 np0005481680 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 12 16:18:58 np0005481680 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 12 16:18:58 np0005481680 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Automatic Boot Loader Update...
Oct 12 16:18:58 np0005481680 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Create Volatile Files and Directories...
Oct 12 16:18:58 np0005481680 bootctl[695]: Couldn't find EFI system partition, skipping.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Automatic Boot Loader Update.
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Create Volatile Files and Directories.
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Security Auditing Service...
Oct 12 16:18:58 np0005481680 systemd[1]: Starting RPC Bind...
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Rebuild Journal Catalog...
Oct 12 16:18:58 np0005481680 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 12 16:18:58 np0005481680 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Rebuild Journal Catalog.
Oct 12 16:18:58 np0005481680 systemd[1]: Started RPC Bind.
Oct 12 16:18:58 np0005481680 augenrules[706]: /sbin/augenrules: No change
Oct 12 16:18:58 np0005481680 augenrules[721]: No rules
Oct 12 16:18:58 np0005481680 augenrules[721]: enabled 1
Oct 12 16:18:58 np0005481680 augenrules[721]: failure 1
Oct 12 16:18:58 np0005481680 augenrules[721]: pid 701
Oct 12 16:18:58 np0005481680 augenrules[721]: rate_limit 0
Oct 12 16:18:58 np0005481680 augenrules[721]: backlog_limit 8192
Oct 12 16:18:58 np0005481680 augenrules[721]: lost 0
Oct 12 16:18:58 np0005481680 augenrules[721]: backlog 3
Oct 12 16:18:58 np0005481680 augenrules[721]: backlog_wait_time 60000
Oct 12 16:18:58 np0005481680 augenrules[721]: backlog_wait_time_actual 0
Oct 12 16:18:58 np0005481680 systemd[1]: Started Security Auditing Service.
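
The augenrules output above is the kernel audit status dump (the same key/value pairs `auditctl -s` prints): rules enabled, auditd running as PID 701, an 8192-entry backlog, and no lost events. A hedged sketch of turning those lines into a dict for log tooling — field names are taken from the output above, nothing else is assumed:

    def parse_audit_status(lines):
        """Parse 'key value' pairs as printed by augenrules/auditctl -s."""
        status = {}
        for line in lines:
            key, _, value = line.strip().partition(" ")
            status[key] = int(value) if value.lstrip("-").isdigit() else value
        return status

    status = parse_audit_status(["enabled 1", "pid 701", "backlog_limit 8192", "lost 0"])
    assert status["pid"] == 701 and status["lost"] == 0
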
Oct 12 16:18:58 np0005481680 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 12 16:18:58 np0005481680 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 12 16:18:59 np0005481680 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 12 16:18:59 np0005481680 systemd[1]: Finished Rebuild Hardware Database.
Oct 12 16:18:59 np0005481680 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 12 16:18:59 np0005481680 systemd[1]: Starting Update is Completed...
Oct 12 16:18:59 np0005481680 systemd[1]: Finished Update is Completed.
Oct 12 16:18:59 np0005481680 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Oct 12 16:18:59 np0005481680 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target System Initialization.
Oct 12 16:18:59 np0005481680 systemd[1]: Started dnf makecache --timer.
Oct 12 16:18:59 np0005481680 systemd[1]: Started Daily rotation of log files.
Oct 12 16:18:59 np0005481680 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target Timer Units.
Oct 12 16:18:59 np0005481680 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 12 16:18:59 np0005481680 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target Socket Units.
Oct 12 16:18:59 np0005481680 systemd[1]: Starting D-Bus System Message Bus...
Oct 12 16:18:59 np0005481680 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 12 16:18:59 np0005481680 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 12 16:18:59 np0005481680 systemd[1]: Starting Load Kernel Module configfs...
Oct 12 16:18:59 np0005481680 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 12 16:18:59 np0005481680 systemd[1]: Finished Load Kernel Module configfs.
Oct 12 16:18:59 np0005481680 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:18:59 np0005481680 systemd[1]: Started D-Bus System Message Bus.
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target Basic System.
Oct 12 16:18:59 np0005481680 dbus-broker-lau[744]: Ready
Oct 12 16:18:59 np0005481680 systemd[1]: Starting NTP client/server...
Oct 12 16:18:59 np0005481680 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 12 16:18:59 np0005481680 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 12 16:18:59 np0005481680 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 12 16:18:59 np0005481680 systemd[1]: Starting IPv4 firewall with iptables...
Oct 12 16:18:59 np0005481680 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 12 16:18:59 np0005481680 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 12 16:18:59 np0005481680 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 12 16:18:59 np0005481680 systemd[1]: Started irqbalance daemon.
Oct 12 16:18:59 np0005481680 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 12 16:18:59 np0005481680 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 16:18:59 np0005481680 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 16:18:59 np0005481680 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target sshd-keygen.target.
Oct 12 16:18:59 np0005481680 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 12 16:18:59 np0005481680 systemd[1]: Reached target User and Group Name Lookups.
Oct 12 16:18:59 np0005481680 systemd[1]: Starting User Login Management...
Oct 12 16:18:59 np0005481680 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 12 16:18:59 np0005481680 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 12 16:18:59 np0005481680 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 12 16:18:59 np0005481680 kernel: Console: switching to colour dummy device 80x25
Oct 12 16:18:59 np0005481680 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 12 16:18:59 np0005481680 kernel: [drm] features: -context_init
Oct 12 16:18:59 np0005481680 kernel: [drm] number of scanouts: 1
Oct 12 16:18:59 np0005481680 kernel: [drm] number of cap sets: 0
Oct 12 16:18:59 np0005481680 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 12 16:18:59 np0005481680 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 12 16:18:59 np0005481680 kernel: Console: switching to colour frame buffer device 128x48
Oct 12 16:18:59 np0005481680 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 12 16:18:59 np0005481680 chronyd[797]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 12 16:18:59 np0005481680 chronyd[797]: Loaded 0 symmetric keys
Oct 12 16:18:59 np0005481680 chronyd[797]: Using right/UTC timezone to obtain leap second data
Oct 12 16:18:59 np0005481680 chronyd[797]: Loaded seccomp filter (level 2)
Oct 12 16:18:59 np0005481680 systemd[1]: Started NTP client/server.
Oct 12 16:18:59 np0005481680 systemd-logind[783]: New seat seat0.
Oct 12 16:18:59 np0005481680 systemd-logind[783]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 12 16:18:59 np0005481680 systemd-logind[783]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 12 16:18:59 np0005481680 systemd[1]: Started User Login Management.
Oct 12 16:18:59 np0005481680 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 12 16:18:59 np0005481680 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 12 16:18:59 np0005481680 kernel: kvm_amd: TSC scaling supported
Oct 12 16:18:59 np0005481680 kernel: kvm_amd: Nested Virtualization enabled
Oct 12 16:18:59 np0005481680 kernel: kvm_amd: Nested Paging enabled
Oct 12 16:18:59 np0005481680 kernel: kvm_amd: LBR virtualization supported
Oct 12 16:18:59 np0005481680 iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Oct 12 16:18:59 np0005481680 systemd[1]: Finished IPv4 firewall with iptables.
Oct 12 16:19:00 np0005481680 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 12 Oct 2025 20:19:00 +0000. Up 6.70 seconds.
Oct 12 16:19:00 np0005481680 systemd[1]: run-cloud\x2dinit-tmp-tmp90k8jyg5.mount: Deactivated successfully.
Oct 12 16:19:00 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 16:19:00 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 16:19:00 np0005481680 systemd-hostnamed[851]: Hostname set to <np0005481680.novalocal> (static)
Oct 12 16:19:00 np0005481680 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 12 16:19:00 np0005481680 systemd[1]: Reached target Preparation for Network.
Oct 12 16:19:00 np0005481680 systemd[1]: Starting Network Manager...
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.6840] NetworkManager (version 1.54.1-1.el9) is starting... (boot:3ec8e364-c708-4309-b486-3e5f1b91e84f)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.6845] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7266] manager[0x5615f452a080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7331] hostname: hostname: using hostnamed
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7331] hostname: static hostname changed from (none) to "np0005481680.novalocal"
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7337] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7466] manager[0x5615f452a080]: rfkill: Wi-Fi hardware radio set enabled
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7468] manager[0x5615f452a080]: rfkill: WWAN hardware radio set enabled
Oct 12 16:19:00 np0005481680 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7604] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7604] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7605] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7606] manager: Networking is enabled by state file
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7608] settings: Loaded settings plugin: keyfile (internal)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7676] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7736] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 12 16:19:00 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7815] dhcp: init: Using DHCP client 'internal'
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7819] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7835] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7891] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7901] device (lo): Activation: starting connection 'lo' (1f9e4de9-da2c-46bc-932f-a03e961620a0)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7911] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7914] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:19:00 np0005481680 systemd[1]: Started Network Manager.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7953] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 12 16:19:00 np0005481680 systemd[1]: Reached target Network.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7980] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7983] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7986] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7988] device (eth0): carrier: link connected
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.7992] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8001] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8010] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 12 16:19:00 np0005481680 systemd[1]: Starting Network Manager Wait Online...
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8021] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8022] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8025] manager: NetworkManager state is now CONNECTING
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8027] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8037] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8042] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:19:00 np0005481680 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 12 16:19:00 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8094] dhcp4 (eth0): state changed new lease, address=38.102.83.111
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8103] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8125] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8130] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8132] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8136] device (lo): Activation: successful, device activated.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8143] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8144] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8146] manager: NetworkManager state is now CONNECTED_SITE
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8149] device (eth0): Activation: successful, device activated.
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8152] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 12 16:19:00 np0005481680 NetworkManager[855]: <info>  [1760300340.8154] manager: startup complete
Oct 12 16:19:00 np0005481680 systemd[1]: Finished Network Manager Wait Online.
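
eth0 above walks NetworkManager's standard activation ladder (unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated). A small sketch of that ordering; the numeric values are believed to mirror libnm's NMDeviceState and are an assumption worth verifying against the installed headers:

    from enum import IntEnum

    class NMDeviceState(IntEnum):
        # values assumed to match libnm's NMDeviceState; verify before relying on them
        UNMANAGED = 10
        UNAVAILABLE = 20
        DISCONNECTED = 30
        PREPARE = 40
        CONFIG = 50
        IP_CONFIG = 70
        IP_CHECK = 80
        SECONDARIES = 90
        ACTIVATED = 100

    def is_forward(old, new):
        """True when a transition progresses toward ACTIVATED, as eth0's do above."""
        return NMDeviceState(new) > NMDeviceState(old)
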
Oct 12 16:19:00 np0005481680 systemd[1]: Starting Cloud-init: Network Stage...
Oct 12 16:19:00 np0005481680 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 12 16:19:00 np0005481680 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 12 16:19:00 np0005481680 systemd[1]: Reached target NFS client services.
Oct 12 16:19:00 np0005481680 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 12 16:19:00 np0005481680 systemd[1]: Reached target Remote File Systems.
Oct 12 16:19:00 np0005481680 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 12 16:19:01 np0005481680 cloud-init[916]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 12 Oct 2025 20:19:01 +0000. Up 7.84 seconds.
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |  eth0  | True |        38.102.83.111         | 255.255.255.0 | global | fa:16:3e:ac:93:44 |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |  eth0  | True | fe80::f816:3eff:feac:9344/64 |       .       |  link  | fa:16:3e:ac:93:44 |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 12 16:19:01 np0005481680 cloud-init[916]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 12 16:19:02 np0005481680 cloud-init[916]: Generating public/private rsa key pair.
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key fingerprint is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: SHA256:PQx1cO7Iy5Z+jouX6N6/pKRSA7NfmGXFDOlkKcBDnKc root@np0005481680.novalocal
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key's randomart image is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: +---[RSA 3072]----+
Oct 12 16:19:02 np0005481680 cloud-init[916]: |     +oo  +Bo    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |      = o.=++    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |       +.= ..    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |      E  =+o     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |       +S=* .    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |      . =..+     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |       o +*..    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |      . oBo+.    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |       +=.*=+.   |
Oct 12 16:19:02 np0005481680 cloud-init[916]: +----[SHA256]-----+
Oct 12 16:19:02 np0005481680 cloud-init[916]: Generating public/private ecdsa key pair.
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key fingerprint is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: SHA256:Y2bznw8HfT8lL9RThT2G9b3giiVDt99e6dlGpUTKTl8 root@np0005481680.novalocal
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key's randomart image is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: +---[ECDSA 256]---+
Oct 12 16:19:02 np0005481680 cloud-init[916]: |              o+.|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |             .oo=|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |         . o +. =|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |        . . *.o.E|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |        So =.++==|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |       + += +o++*|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |         .....o=+|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |           . +o.*|
Oct 12 16:19:02 np0005481680 cloud-init[916]: |            o..=.|
Oct 12 16:19:02 np0005481680 cloud-init[916]: +----[SHA256]-----+
Oct 12 16:19:02 np0005481680 cloud-init[916]: Generating public/private ed25519 key pair.
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 12 16:19:02 np0005481680 cloud-init[916]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key fingerprint is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: SHA256:VDPBijs1CxI95jWlZUkZKdbnsrAljySv4Mofufc79fM root@np0005481680.novalocal
Oct 12 16:19:02 np0005481680 cloud-init[916]: The key's randomart image is:
Oct 12 16:19:02 np0005481680 cloud-init[916]: +--[ED25519 256]--+
Oct 12 16:19:02 np0005481680 cloud-init[916]: |     .   =@*     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |    . + ==*o.    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |     + =o+ o     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |    . =.B o .    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |     . BSX o     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |    ..o = +      |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |   .o. o . .     |
Oct 12 16:19:02 np0005481680 cloud-init[916]: | .  .oo .   o    |
Oct 12 16:19:02 np0005481680 cloud-init[916]: |  ooo. .oo   oE  |
Oct 12 16:19:02 np0005481680 cloud-init[916]: +----[SHA256]-----+
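
The SHA256 fingerprints printed above are the unpadded base64 of the SHA-256 digest of the raw public-key blob, which is how OpenSSH derives them. A self-contained sketch that reproduces a fingerprint from a .pub line:

    import base64, hashlib

    def sha256_fingerprint(pubkey_line):
        """OpenSSH-style SHA256 fingerprint of one .pub/authorized_keys line."""
        blob = base64.b64decode(pubkey_line.split()[1])  # second field is the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # e.g. sha256_fingerprint(open("/etc/ssh/ssh_host_ed25519_key.pub").read())
    # should yield SHA256:VDPBijs1CxI95jWlZUkZKdbnsrAljySv4Mofufc79fM on this host
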
Oct 12 16:19:02 np0005481680 systemd[1]: Finished Cloud-init: Network Stage.
Oct 12 16:19:02 np0005481680 systemd[1]: Reached target Cloud-config availability.
Oct 12 16:19:02 np0005481680 systemd[1]: Reached target Network is Online.
Oct 12 16:19:02 np0005481680 systemd[1]: Starting Cloud-init: Config Stage...
Oct 12 16:19:02 np0005481680 systemd[1]: Starting Notify NFS peers of a restart...
Oct 12 16:19:02 np0005481680 systemd[1]: Starting System Logging Service...
Oct 12 16:19:02 np0005481680 sm-notify[997]: Version 2.5.4 starting
Oct 12 16:19:03 np0005481680 systemd[1]: Starting OpenSSH server daemon...
Oct 12 16:19:03 np0005481680 systemd[1]: Starting Permit User Sessions...
Oct 12 16:19:03 np0005481680 systemd[1]: Started Notify NFS peers of a restart.
Oct 12 16:19:03 np0005481680 systemd[1]: Started OpenSSH server daemon.
Oct 12 16:19:03 np0005481680 systemd[1]: Finished Permit User Sessions.
Oct 12 16:19:03 np0005481680 systemd[1]: Started Command Scheduler.
Oct 12 16:19:03 np0005481680 systemd[1]: Started Getty on tty1.
Oct 12 16:19:03 np0005481680 systemd[1]: Started Serial Getty on ttyS0.
Oct 12 16:19:03 np0005481680 systemd[1]: Reached target Login Prompts.
Oct 12 16:19:03 np0005481680 rsyslogd[998]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="998" x-info="https://www.rsyslog.com"] start
Oct 12 16:19:03 np0005481680 rsyslogd[998]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 12 16:19:03 np0005481680 systemd[1]: Started System Logging Service.
Oct 12 16:19:03 np0005481680 systemd[1]: Reached target Multi-User System.
Oct 12 16:19:03 np0005481680 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 12 16:19:03 np0005481680 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 12 16:19:03 np0005481680 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 12 16:19:03 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 16:19:03 np0005481680 cloud-init[1012]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 12 Oct 2025 20:19:03 +0000. Up 9.92 seconds.
Oct 12 16:19:03 np0005481680 systemd[1]: Finished Cloud-init: Config Stage.
Oct 12 16:19:03 np0005481680 systemd[1]: Starting Cloud-init: Final Stage...
Oct 12 16:19:03 np0005481680 cloud-init[1016]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 12 Oct 2025 20:19:03 +0000. Up 10.30 seconds.
Oct 12 16:19:03 np0005481680 cloud-init[1021]: #############################################################
Oct 12 16:19:03 np0005481680 cloud-init[1023]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 12 16:19:03 np0005481680 cloud-init[1026]: 256 SHA256:Y2bznw8HfT8lL9RThT2G9b3giiVDt99e6dlGpUTKTl8 root@np0005481680.novalocal (ECDSA)
Oct 12 16:19:03 np0005481680 cloud-init[1029]: 256 SHA256:VDPBijs1CxI95jWlZUkZKdbnsrAljySv4Mofufc79fM root@np0005481680.novalocal (ED25519)
Oct 12 16:19:03 np0005481680 cloud-init[1032]: 3072 SHA256:PQx1cO7Iy5Z+jouX6N6/pKRSA7NfmGXFDOlkKcBDnKc root@np0005481680.novalocal (RSA)
Oct 12 16:19:03 np0005481680 cloud-init[1034]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 12 16:19:03 np0005481680 cloud-init[1035]: #############################################################
Oct 12 16:19:03 np0005481680 cloud-init[1016]: Cloud-init v. 24.4-7.el9 finished at Sun, 12 Oct 2025 20:19:03 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.53 seconds
Oct 12 16:19:03 np0005481680 systemd[1]: Finished Cloud-init: Final Stage.
Oct 12 16:19:03 np0005481680 systemd[1]: Reached target Cloud-init target.
Oct 12 16:19:03 np0005481680 systemd[1]: Startup finished in 1.637s (kernel) + 2.755s (initrd) + 6.226s (userspace) = 10.620s.
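
systemd's summary above splits boot time into kernel, initrd, and userspace phases that sum to the total. For completeness, a small parser sketch for that line (the regex is mine, not systemd's):

    import re

    LINE = ("Startup finished in 1.637s (kernel) + 2.755s (initrd) "
            "+ 6.226s (userspace) = 10.620s.")

    def parse_startup(line):
        """Map each boot phase to seconds from systemd's 'Startup finished' line."""
        return {phase: float(sec)
                for sec, phase in re.findall(r"([\d.]+)s \((\w+)\)", line)}

    phases = parse_startup(LINE)
    assert abs(sum(phases.values()) - 10.620) < 0.005  # phases sum to the total
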
Oct 12 16:19:05 np0005481680 chronyd[797]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Oct 12 16:19:05 np0005481680 chronyd[797]: System clock TAI offset set to 37 seconds
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 25 affinity is now unmanaged
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 31 affinity is now unmanaged
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 28 affinity is now unmanaged
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 32 affinity is now unmanaged
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 30 affinity is now unmanaged
Oct 12 16:19:09 np0005481680 irqbalance[778]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 12 16:19:09 np0005481680 irqbalance[778]: IRQ 29 affinity is now unmanaged
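
The irqbalance EPERM errors above are common in KVM guests: the affected vectors are typically kernel-managed and cannot be retargeted from userspace, so irqbalance simply marks them unmanaged. Affinity masks live in procfs; a read-only sketch (writing requires root and a retargetable IRQ):

    def irq_affinity_mask(irq):
        """Read an IRQ's CPU affinity bitmask from /proc (hex, comma-grouped)."""
        with open(f"/proc/irq/{irq}/smp_affinity") as fh:
            return int(fh.read().strip().replace(",", ""), 16)
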
Oct 12 16:19:11 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 16:19:17 np0005481680 systemd[1]: Created slice User Slice of UID 1000.
Oct 12 16:19:17 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 12 16:19:17 np0005481680 systemd-logind[783]: New session 1 of user zuul.
Oct 12 16:19:17 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 12 16:19:17 np0005481680 systemd[1]: Starting User Manager for UID 1000...
Oct 12 16:19:17 np0005481680 systemd[1054]: Queued start job for default target Main User Target.
Oct 12 16:19:17 np0005481680 systemd[1054]: Created slice User Application Slice.
Oct 12 16:19:17 np0005481680 systemd[1054]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:19:17 np0005481680 systemd[1054]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 16:19:17 np0005481680 systemd[1054]: Reached target Paths.
Oct 12 16:19:17 np0005481680 systemd[1054]: Reached target Timers.
Oct 12 16:19:17 np0005481680 systemd[1054]: Starting D-Bus User Message Bus Socket...
Oct 12 16:19:17 np0005481680 systemd[1054]: Starting Create User's Volatile Files and Directories...
Oct 12 16:19:17 np0005481680 systemd[1054]: Finished Create User's Volatile Files and Directories.
Oct 12 16:19:17 np0005481680 systemd[1054]: Listening on D-Bus User Message Bus Socket.
Oct 12 16:19:17 np0005481680 systemd[1054]: Reached target Sockets.
Oct 12 16:19:17 np0005481680 systemd[1054]: Reached target Basic System.
Oct 12 16:19:17 np0005481680 systemd[1054]: Reached target Main User Target.
Oct 12 16:19:17 np0005481680 systemd[1054]: Startup finished in 134ms.
Oct 12 16:19:17 np0005481680 systemd[1]: Started User Manager for UID 1000.
Oct 12 16:19:17 np0005481680 systemd[1]: Started Session 1 of User zuul.
Oct 12 16:19:17 np0005481680 python3[1136]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:19:20 np0005481680 python3[1164]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:19:29 np0005481680 python3[1222]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:19:30 np0005481680 python3[1262]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 12 16:19:30 np0005481680 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 12 16:19:32 np0005481680 python3[1290]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbcYFCaE4Uj7Jnw32kXhCaCjTrT37KWHAJ6SsBWsiACXfFoFLM6B+4Wg1lKyQ14zcH3wQyAHUqjVPCKWTnOQ2PmDSVTgNgG7sF+bAITZObf8q6iB6IYOSn5ZmUOlajo7DdxT1KTpPqIyY89vW3gV9V7mXYN6GoNrtQbzCW2sQecXDJQrIMSusaurOnKlM9mLyIkbEGHf4G0fIFpXvJlSDPpAsLVB0Juijgzs+DGagr2dKt6GKHO2VDoQRPUXT3NZNj8TWLV+p4J8FF0Apv90R/vMqc7jHKUVCRwDZ+ZnAjDyYnFdW1nNNXoDLgU+d4TAvb95XI7387BJUTlo0uLupFwo4ALW5/7Gatm/fWuVsSjXp3W5EDgQ18GeR5naQNdNilxPVmUbbIi8qqq/eQ/hUFb1Jak11yh8svEXYy4CMEeWJCROFaBUNtr81SFw/ExHTOVokPkwXd3W7pGzgj8QxXmsq4kRVFJ8woqJ6uZmS7Yii8aB8uotpTj9UZZsrDGj0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:33 np0005481680 python3[1314]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:33 np0005481680 python3[1413]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:33 np0005481680 python3[1484]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760300373.2655773-251-201951947206559/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cbb9f0f6b81a42e6804b863f49f37376_id_rsa follow=False checksum=058ea436c003b925cd339d10f680d1c47254912f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:34 np0005481680 python3[1607]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:35 np0005481680 python3[1678]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760300374.3267393-306-43677173591196/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cbb9f0f6b81a42e6804b863f49f37376_id_rsa.pub follow=False checksum=a0b002a47ee388de345d8f4a0c6d5f95e28a0e19 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:36 np0005481680 python3[1726]: ansible-ping Invoked with data=pong
Oct 12 16:19:37 np0005481680 python3[1750]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:19:39 np0005481680 python3[1808]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 12 16:19:40 np0005481680 python3[1840]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:40 np0005481680 python3[1864]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:41 np0005481680 python3[1888]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:41 np0005481680 python3[1912]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:41 np0005481680 python3[1936]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:41 np0005481680 python3[1960]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:43 np0005481680 python3[1986]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:44 np0005481680 python3[2064]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:44 np0005481680 python3[2137]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760300383.9111369-31-64075616298049/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:45 np0005481680 python3[2185]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:45 np0005481680 python3[2209]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:46 np0005481680 python3[2233]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:46 np0005481680 python3[2257]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:46 np0005481680 python3[2281]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:46 np0005481680 python3[2305]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:47 np0005481680 python3[2329]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:47 np0005481680 python3[2353]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:47 np0005481680 python3[2377]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:47 np0005481680 python3[2401]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:48 np0005481680 python3[2425]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:48 np0005481680 python3[2449]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:48 np0005481680 python3[2473]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:49 np0005481680 python3[2497]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:49 np0005481680 python3[2521]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:49 np0005481680 python3[2545]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:50 np0005481680 python3[2569]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:50 np0005481680 python3[2593]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:50 np0005481680 python3[2617]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:51 np0005481680 python3[2641]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:51 np0005481680 python3[2665]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:51 np0005481680 python3[2689]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:51 np0005481680 python3[2713]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:52 np0005481680 python3[2737]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:52 np0005481680 python3[2761]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:52 np0005481680 python3[2785]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:19:54 np0005481680 python3[2811]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 12 16:19:54 np0005481680 systemd[1]: Starting Time & Date Service...
Oct 12 16:19:54 np0005481680 systemd[1]: Started Time & Date Service.
Oct 12 16:19:54 np0005481680 systemd-timedated[2813]: Changed time zone to 'UTC' (UTC).
Oct 12 16:19:56 np0005481680 python3[2842]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:56 np0005481680 python3[2918]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:57 np0005481680 python3[2989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760300396.4935374-251-25827123063019/source _original_basename=tmpflt968wd follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:57 np0005481680 python3[3089]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:57 np0005481680 python3[3160]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760300397.3595467-301-52722958840977/source _original_basename=tmp2yvasm47 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:58 np0005481680 python3[3262]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:19:59 np0005481680 python3[3335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760300398.5473914-381-233369889364814/source _original_basename=tmp64jb3mk7 follow=False checksum=634f92f67c90daca0d0661ff9e082945cbba2c1b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:19:59 np0005481680 python3[3383]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:20:00 np0005481680 python3[3409]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:20:00 np0005481680 python3[3489]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:20:00 np0005481680 python3[3562]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760300400.2096035-451-111264854501637/source _original_basename=tmpfmp3moyx follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:20:01 np0005481680 python3[3613]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-0dd0-7499-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
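
The copy at 16:20:00 drops /etc/sudoers.d/zuul-sudo-grep with mode=288, the decimal form of octal 0440, and the command above then validates the resulting sudoers configuration with /usr/sbin/visudo -c. A sketch of the same pattern; the rule shown is a placeholder, since the real file content is masked in the log (NOT_LOGGING_PARAMETER):

# Stage a sudoers fragment, check it in isolation, install it 0440, re-check the whole config.
printf '%s\n' 'zuul ALL=(ALL) NOPASSWD: /usr/bin/grep' > /tmp/zuul-sudo-grep  # placeholder rule
visudo -cf /tmp/zuul-sudo-grep
install -m 0440 /tmp/zuul-sudo-grep /etc/sudoers.d/zuul-sudo-grep             # 288 decimal == 0440 octal
visudo -c
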
Oct 12 16:20:02 np0005481680 python3[3641]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-0dd0-7499-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 12 16:20:03 np0005481680 python3[3669]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:20:09 np0005481680 irqbalance[778]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 12 16:20:09 np0005481680 irqbalance[778]: IRQ 26 affinity is now unmanaged
Oct 12 16:20:22 np0005481680 python3[3695]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:20:24 np0005481680 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 12 16:21:02 np0005481680 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 12 16:21:02 np0005481680 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.1970] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 12 16:21:02 np0005481680 systemd-udevd[3698]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2145] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2169] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2172] device (eth1): carrier: link connected
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2173] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2178] policy: auto-activating connection 'Wired connection 1' (38704a7a-1cf9-3e21-9c3b-42fb6d65a758)
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2182] device (eth1): Activation: starting connection 'Wired connection 1' (38704a7a-1cf9-3e21-9c3b-42fb6d65a758)
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2183] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2185] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2188] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:21:02 np0005481680 NetworkManager[855]: <info>  [1760300462.2191] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:21:03 np0005481680 python3[3725]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-936d-75f8-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
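
The ip -j link call above asks iproute2 for the link table as JSON, which the playbook can parse instead of scraping column output. For example, pulling just the interface names (assumes jq is available on the node):

ip -j link | jq -r '.[].ifname'   # lo, eth0, eth1 on this host
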
Oct 12 16:21:13 np0005481680 python3[3805]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:21:13 np0005481680 python3[3878]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760300473.0092523-104-281097132348514/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=92862b0da9f5599e05233441b79cf15b2ebe2348 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
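
The stat/copy pair above renders a Jinja2 template (bootstrap-ci-network-nm-connection.nmconnection.j2) to /etc/NetworkManager/system-connections/ci-private-network.nmconnection, root-owned with mode 0600 as NetworkManager requires for keyfiles, and the next task restarts NetworkManager to load it. The rendered content is not logged; a keyfile of this shape might look like the sketch below, where the interface name and the static addressing are assumptions:

# Hypothetical keyfile contents; the real template output is NOT_LOGGING_PARAMETER.
cat > /etc/NetworkManager/system-connections/ci-private-network.nmconnection <<'EOF'
[connection]
id=ci-private-network
type=ethernet
interface-name=eth1

[ipv4]
method=manual
address1=192.168.0.10/24
EOF
chmod 0600 /etc/NetworkManager/system-connections/ci-private-network.nmconnection
systemctl restart NetworkManager   # as the following task does
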
Oct 12 16:21:14 np0005481680 python3[3928]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:21:14 np0005481680 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 12 16:21:14 np0005481680 systemd[1]: Stopped Network Manager Wait Online.
Oct 12 16:21:14 np0005481680 systemd[1]: Stopping Network Manager Wait Online...
Oct 12 16:21:14 np0005481680 systemd[1]: Stopping Network Manager...
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6258] caught SIGTERM, shutting down normally.
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6270] dhcp4 (eth0): canceled DHCP transaction
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6271] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6271] dhcp4 (eth0): state changed no lease
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6275] manager: NetworkManager state is now CONNECTING
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6371] dhcp4 (eth1): canceled DHCP transaction
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6372] dhcp4 (eth1): state changed no lease
Oct 12 16:21:14 np0005481680 NetworkManager[855]: <info>  [1760300474.6435] exiting (success)
Oct 12 16:21:14 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:21:14 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:21:14 np0005481680 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 12 16:21:14 np0005481680 systemd[1]: Stopped Network Manager.
Oct 12 16:21:14 np0005481680 systemd[1]: NetworkManager.service: Consumed 1.056s CPU time, 10.0M memory peak.
Oct 12 16:21:14 np0005481680 systemd[1]: Starting Network Manager...
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.6954] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3ec8e364-c708-4309-b486-3e5f1b91e84f)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.6956] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7015] manager[0x55a108688070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 12 16:21:14 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 16:21:14 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7975] hostname: hostname: using hostnamed
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7977] hostname: static hostname changed from (none) to "np0005481680.novalocal"
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7985] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7992] manager[0x55a108688070]: rfkill: Wi-Fi hardware radio set enabled
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.7992] manager[0x55a108688070]: rfkill: WWAN hardware radio set enabled
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8041] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8042] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8043] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8043] manager: Networking is enabled by state file
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8047] settings: Loaded settings plugin: keyfile (internal)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8053] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8095] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8112] dhcp: init: Using DHCP client 'internal'
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8116] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8124] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8134] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8146] device (lo): Activation: starting connection 'lo' (1f9e4de9-da2c-46bc-932f-a03e961620a0)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8157] device (eth0): carrier: link connected
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8164] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8172] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8173] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8183] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8194] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8204] device (eth1): carrier: link connected
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8211] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8218] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (38704a7a-1cf9-3e21-9c3b-42fb6d65a758) (indicated)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8219] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8228] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8238] device (eth1): Activation: starting connection 'Wired connection 1' (38704a7a-1cf9-3e21-9c3b-42fb6d65a758)
Oct 12 16:21:14 np0005481680 systemd[1]: Started Network Manager.
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8247] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8253] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8257] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8259] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8263] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8267] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8271] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8275] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8280] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8290] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8294] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8306] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8310] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8340] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8349] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8359] device (lo): Activation: successful, device activated.
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8370] dhcp4 (eth0): state changed new lease, address=38.102.83.111
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8386] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 12 16:21:14 np0005481680 systemd[1]: Starting Network Manager Wait Online...
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8452] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8475] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8476] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8478] manager: NetworkManager state is now CONNECTED_SITE
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8481] device (eth0): Activation: successful, device activated.
Oct 12 16:21:14 np0005481680 NetworkManager[3937]: <info>  [1760300474.8485] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 12 16:21:15 np0005481680 python3[4012]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-936d-75f8-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:21:22 np0005481680 systemd[1054]: Starting Mark boot as successful...
Oct 12 16:21:22 np0005481680 systemd[1054]: Finished Mark boot as successful.
Oct 12 16:21:24 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 16:21:44 np0005481680 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3149] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 12 16:22:00 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:22:00 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3383] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3387] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3402] device (eth1): Activation: successful, device activated.
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3413] manager: startup complete
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3419] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <warn>  [1760300520.3427] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3439] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 systemd[1]: Finished Network Manager Wait Online.
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3551] dhcp4 (eth1): canceled DHCP transaction
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3552] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3552] dhcp4 (eth1): state changed no lease
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3574] policy: auto-activating connection 'ci-private-network' (8fefafd4-aabf-52b9-842a-497d64cc3f86)
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3581] device (eth1): Activation: starting connection 'ci-private-network' (8fefafd4-aabf-52b9-842a-497d64cc3f86)
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3583] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3586] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3596] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3609] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3664] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3666] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:22:00 np0005481680 NetworkManager[3937]: <info>  [1760300520.3675] device (eth1): Activation: successful, device activated.
Oct 12 16:22:10 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
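
Reading the 16:22:00 burst: eth1's DHCP attempt on the auto-created 'Wired connection 1' profile times out (reason 'ip-config-unavailable'), the activation is marked failed, and NetworkManager's policy immediately auto-activates the newly installed 'ci-private-network' profile instead, which reaches 'activated' without a lease, consistent with static addressing. Two quick checks that would confirm the final state (illustrative, not from the log):

nmcli -f DEVICE,STATE,CONNECTION device status           # eth1 should show connected / ci-private-network
nmcli -f ipv4.method connection show ci-private-network  # 'manual' if the profile is static, as assumed above
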
Oct 12 16:22:15 np0005481680 systemd-logind[783]: Session 1 logged out. Waiting for processes to exit.
Oct 12 16:23:13 np0005481680 systemd-logind[783]: New session 3 of user zuul.
Oct 12 16:23:14 np0005481680 systemd[1]: Started Session 3 of User zuul.
Oct 12 16:23:14 np0005481680 python3[4124]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:23:14 np0005481680 python3[4197]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760300594.1272895-373-125923954115990/source _original_basename=tmphwpfkat9 follow=False checksum=63c2af0004216c08eb8a55f130450b6e43b0efbe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:23:18 np0005481680 systemd[1]: session-3.scope: Deactivated successfully.
Oct 12 16:23:18 np0005481680 systemd-logind[783]: Session 3 logged out. Waiting for processes to exit.
Oct 12 16:23:18 np0005481680 systemd-logind[783]: Removed session 3.
Oct 12 16:24:22 np0005481680 systemd[1054]: Created slice User Background Tasks Slice.
Oct 12 16:24:22 np0005481680 systemd[1054]: Starting Cleanup of User's Temporary Files and Directories...
Oct 12 16:24:22 np0005481680 systemd[1054]: Finished Cleanup of User's Temporary Files and Directories.
Oct 12 16:28:52 np0005481680 systemd-logind[783]: New session 4 of user zuul.
Oct 12 16:28:52 np0005481680 systemd[1]: Started Session 4 of User zuul.
Oct 12 16:28:52 np0005481680 python3[4256]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-a63e-ab13-000000001cfe-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:52 np0005481680 python3[4285]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:28:53 np0005481680 python3[4311]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:28:53 np0005481680 python3[4337]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:28:53 np0005481680 python3[4363]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:28:54 np0005481680 python3[4389]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:28:54 np0005481680 python3[4389]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 12 16:28:55 np0005481680 python3[4415]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 16:28:55 np0005481680 systemd[1]: Reloading.
Oct 12 16:28:55 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:28:56 np0005481680 python3[4471]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 12 16:28:57 np0005481680 python3[4497]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:57 np0005481680 python3[4525]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:58 np0005481680 python3[4553]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:58 np0005481680 python3[4581]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:58 np0005481680 python3[4608]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-a63e-ab13-000000001d04-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:28:59 np0005481680 python3[4638]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
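
The session-4 sequence derives the disk's major:minor with lsblk (252:0 for this virtio /dev/vda), then writes one cgroup-v2 io.max line into each top-level slice, capping I/O at 18000 read/write IOPS and 262144000 bytes/s (250 MiB/s) each way, and finally stats kubepods.slice/io.max, which does not exist yet. The same steps condensed into a loop:

DEV=$(lsblk -nd -o MAJ:MIN /dev/vda)   # "252:0" on this node
for cg in init.scope machine.slice system.slice user.slice; do
  echo "$DEV riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
    > "/sys/fs/cgroup/$cg/io.max"
done
grep -H . /sys/fs/cgroup/*/io.max 2>/dev/null   # read the limits back
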
Oct 12 16:29:02 np0005481680 systemd[1]: session-4.scope: Deactivated successfully.
Oct 12 16:29:02 np0005481680 systemd[1]: session-4.scope: Consumed 3.608s CPU time.
Oct 12 16:29:02 np0005481680 systemd-logind[783]: Session 4 logged out. Waiting for processes to exit.
Oct 12 16:29:02 np0005481680 systemd-logind[783]: Removed session 4.
Oct 12 16:29:04 np0005481680 systemd-logind[783]: New session 5 of user zuul.
Oct 12 16:29:04 np0005481680 systemd[1]: Started Session 5 of User zuul.
Oct 12 16:29:04 np0005481680 python3[4672]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
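
Session 5 pulls in the container tooling; the ansible-ansible.legacy.dnf call above is equivalent to:

dnf -y install podman buildah

The SELinux SID-table conversions that follow are policy reloads during this install, most likely from container-selinux (and related policy packages) being pulled in and updating the loaded policy.
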
Oct 12 16:29:19 np0005481680 kernel: SELinux:  Converting 363 SID table entries...
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:29:19 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  Converting 363 SID table entries...
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:29:28 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  Converting 363 SID table entries...
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:29:36 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:29:37 np0005481680 setsebool[4735]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 12 16:29:37 np0005481680 setsebool[4735]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
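
Those two booleans are toggled by a package scriptlet running as root during the same install. Done by hand, and assuming the persistent form (-P, which these log lines don't confirm), it would be:

setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
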
Oct 12 16:29:53 np0005481680 kernel: SELinux:  Converting 366 SID table entries...
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:29:53 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:30:13 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 12 16:30:13 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:30:13 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:30:13 np0005481680 systemd[1]: Reloading.
Oct 12 16:30:13 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:30:14 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:30:14 np0005481680 systemd[1]: Starting PackageKit Daemon...
Oct 12 16:30:14 np0005481680 systemd[1]: Starting Authorization Manager...
Oct 12 16:30:14 np0005481680 polkitd[6190]: Started polkitd version 0.117
Oct 12 16:30:15 np0005481680 systemd[1]: Started Authorization Manager.
Oct 12 16:30:15 np0005481680 systemd[1]: Started PackageKit Daemon.
Oct 12 16:30:16 np0005481680 python3[7129]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-7c55-d342-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:30:16 np0005481680 kernel: evm: overlay not supported
Oct 12 16:30:16 np0005481680 systemd[1054]: Starting D-Bus User Message Bus...
Oct 12 16:30:16 np0005481680 dbus-broker-launch[8131]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 12 16:30:16 np0005481680 dbus-broker-launch[8131]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 12 16:30:16 np0005481680 systemd[1054]: Started D-Bus User Message Bus.
Oct 12 16:30:16 np0005481680 dbus-broker-lau[8131]: Ready
Oct 12 16:30:16 np0005481680 systemd[1054]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 12 16:30:16 np0005481680 systemd[1054]: Created slice Slice /user.
Oct 12 16:30:16 np0005481680 systemd[1054]: podman-7978.scope: unit configures an IP firewall, but not running as root.
Oct 12 16:30:16 np0005481680 systemd[1054]: (This warning is only shown for the first unit using IP firewalling.)
Oct 12 16:30:16 np0005481680 systemd[1054]: Started podman-7978.scope.
Oct 12 16:30:17 np0005481680 systemd[1054]: Started podman-pause-5d45cbf0.scope.
Oct 12 16:30:17 np0005481680 systemd[1]: session-5.scope: Deactivated successfully.
Oct 12 16:30:17 np0005481680 systemd[1]: session-5.scope: Consumed 58.397s CPU time.
Oct 12 16:30:17 np0005481680 systemd-logind[783]: Session 5 logged out. Waiting for processes to exit.
Oct 12 16:30:17 np0005481680 systemd-logind[783]: Removed session 5.
Oct 12 16:30:29 np0005481680 irqbalance[778]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 12 16:30:29 np0005481680 irqbalance[778]: IRQ 27 affinity is now unmanaged
Oct 12 16:30:38 np0005481680 systemd-logind[783]: New session 6 of user zuul.
Oct 12 16:30:38 np0005481680 systemd[1]: Started Session 6 of User zuul.
Oct 12 16:30:38 np0005481680 python3[17598]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIFVlDqbsvn2OAWWdroCAtdF+c6+ovTtB7HtAyo5lCRFQwn5SlzQxJTN31VgoMjJatZEJHsz+KFadJryiq+UICs= zuul@np0005481679.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:30:39 np0005481680 python3[17810]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIFVlDqbsvn2OAWWdroCAtdF+c6+ovTtB7HtAyo5lCRFQwn5SlzQxJTN31VgoMjJatZEJHsz+KFadJryiq+UICs= zuul@np0005481679.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:30:40 np0005481680 python3[18213]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005481680.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 12 16:30:40 np0005481680 python3[18451]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIFVlDqbsvn2OAWWdroCAtdF+c6+ovTtB7HtAyo5lCRFQwn5SlzQxJTN31VgoMjJatZEJHsz+KFadJryiq+UICs= zuul@np0005481679.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 12 16:30:41 np0005481680 python3[18727]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:30:41 np0005481680 python3[18968]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760301040.9168813-150-254075833340105/source _original_basename=tmpx5bjhr6a follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:30:42 np0005481680 python3[19269]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 12 16:30:42 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 16:30:42 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 16:30:42 np0005481680 systemd-hostnamed[19363]: Changed pretty hostname to 'compute-0'
Oct 12 16:30:42 np0005481680 systemd-hostnamed[19363]: Hostname set to <compute-0> (static)
Oct 12 16:30:42 np0005481680 NetworkManager[3937]: <info>  [1760301042.7051] hostname: static hostname changed from "np0005481680.novalocal" to "compute-0"
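
The ansible.builtin.hostname task with use=systemd goes through systemd-hostnamed over D-Bus, which is why hostnamed logs both the pretty and the static name change and NetworkManager picks up the new static hostname. The manual equivalent:

hostnamectl set-hostname compute-0   # updates the static name; hostnamed mirrors it as shown above
hostnamectl status                   # verify
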
Oct 12 16:30:42 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:30:42 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:30:43 np0005481680 systemd[1]: session-6.scope: Deactivated successfully.
Oct 12 16:30:43 np0005481680 systemd[1]: session-6.scope: Consumed 2.350s CPU time.
Oct 12 16:30:43 np0005481680 systemd-logind[783]: Session 6 logged out. Waiting for processes to exit.
Oct 12 16:30:43 np0005481680 systemd-logind[783]: Removed session 6.
Oct 12 16:30:52 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 16:31:02 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:31:02 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:31:02 np0005481680 systemd[1]: man-db-cache-update.service: Consumed 58.216s CPU time.
Oct 12 16:31:02 np0005481680 systemd[1]: run-ra81e7d22b289448d8fe1b25b684884cb.service: Deactivated successfully.
Oct 12 16:31:12 np0005481680 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 12 16:34:12 np0005481680 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 12 16:34:12 np0005481680 systemd-logind[783]: New session 7 of user zuul.
Oct 12 16:34:12 np0005481680 systemd[1]: Started Session 7 of User zuul.
Oct 12 16:34:12 np0005481680 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 12 16:34:12 np0005481680 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 12 16:34:12 np0005481680 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 12 16:34:12 np0005481680 python3[26591]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:34:14 np0005481680 python3[26707]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:15 np0005481680 python3[26780]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=delorean.repo follow=False checksum=f3fabc627b4c59ab3d10213193ffdeeed080e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:15 np0005481680 python3[26806]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:15 np0005481680 python3[26879]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:16 np0005481680 python3[26905]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:16 np0005481680 python3[26978]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:16 np0005481680 python3[27004]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:17 np0005481680 python3[27077]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:17 np0005481680 python3[27103]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:17 np0005481680 python3[27176]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:17 np0005481680 python3[27202]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:18 np0005481680 python3[27277]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:18 np0005481680 python3[27303]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:34:18 np0005481680 python3[27376]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760301254.4576201-30569-105764698530682/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=5e44558a2b46929660a6b5bfc8824fb4521580a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:34:30 np0005481680 python3[27434]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:35:20 np0005481680 systemd[1]: packagekit.service: Deactivated successfully.
Oct 12 16:39:30 np0005481680 systemd[1]: session-7.scope: Deactivated successfully.
Oct 12 16:39:30 np0005481680 systemd[1]: session-7.scope: Consumed 5.225s CPU time.
Oct 12 16:39:30 np0005481680 systemd-logind[783]: Session 7 logged out. Waiting for processes to exit.
Oct 12 16:39:30 np0005481680 systemd-logind[783]: Removed session 7.
Oct 12 16:44:45 np0005481680 systemd-logind[783]: New session 8 of user zuul.
Oct 12 16:44:45 np0005481680 systemd[1]: Started Session 8 of User zuul.
Oct 12 16:44:46 np0005481680 python3.9[27595]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:44:48 np0005481680 python3.9[27776]: ansible-ansible.legacy.command Invoked with _raw_params=
    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:44:55 np0005481680 systemd[1]: session-8.scope: Deactivated successfully.
Oct 12 16:44:55 np0005481680 systemd[1]: session-8.scope: Consumed 7.635s CPU time.
Oct 12 16:44:55 np0005481680 systemd-logind[783]: Session 8 logged out. Waiting for processes to exit.
Oct 12 16:44:55 np0005481680 systemd-logind[783]: Removed session 8.
Oct 12 16:45:11 np0005481680 systemd-logind[783]: New session 9 of user zuul.
Oct 12 16:45:11 np0005481680 systemd[1]: Started Session 9 of User zuul.
Oct 12 16:45:12 np0005481680 python3.9[27986]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 12 16:45:13 np0005481680 python3.9[28160]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:45:14 np0005481680 python3.9[28312]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:45:15 np0005481680 python3.9[28465]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:45:16 np0005481680 python3.9[28617]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:45:17 np0005481680 python3.9[28769]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:45:17 np0005481680 python3.9[28892]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760301916.8049145-177-12317732262439/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:45:18 np0005481680 python3.9[29044]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:45:19 np0005481680 python3.9[29200]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:45:20 np0005481680 python3.9[29350]: ansible-ansible.builtin.service_facts Invoked
Oct 12 16:45:24 np0005481680 python3.9[29605]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:45:25 np0005481680 python3.9[29755]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:45:26 np0005481680 python3.9[29909]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:45:27 np0005481680 python3.9[30067]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:45:28 np0005481680 python3.9[30151]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
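Note: the ansible-ansible.legacy.dnf task above installs the base EDPM package set; a roughly equivalent manual invocation (a sketch, not captured in this log) would be:
    dnf install -y driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos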
Oct 12 16:46:10 np0005481680 systemd[1]: Reloading.
Oct 12 16:46:10 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:46:10 np0005481680 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 12 16:46:11 np0005481680 systemd[1]: Reloading.
Oct 12 16:46:11 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:46:11 np0005481680 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 12 16:46:11 np0005481680 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 12 16:46:11 np0005481680 systemd[1]: Reloading.
Oct 12 16:46:11 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:46:11 np0005481680 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 12 16:46:11 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 16:46:11 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 16:47:14 np0005481680 kernel: SELinux:  Converting 2713 SID table entries...
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:47:14 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:47:14 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 12 16:47:15 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:47:15 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:47:15 np0005481680 systemd[1]: Reloading.
Oct 12 16:47:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:47:15 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:47:15 np0005481680 systemd[1]: Starting PackageKit Daemon...
Oct 12 16:47:15 np0005481680 systemd[1]: Started PackageKit Daemon.
Oct 12 16:47:16 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:47:16 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:47:16 np0005481680 systemd[1]: man-db-cache-update.service: Consumed 1.306s CPU time.
Oct 12 16:47:16 np0005481680 systemd[1]: run-r8ac1cd0176234dd1bf9cf82b89957c3a.service: Deactivated successfully.
Oct 12 16:47:16 np0005481680 python3.9[31663]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:47:18 np0005481680 python3.9[31944]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 12 16:47:19 np0005481680 python3.9[32096]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 12 16:47:21 np0005481680 python3.9[32249]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:47:23 np0005481680 python3.9[32401]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
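Note: the ansible.posix.mount call above (src=/swap, name=none, fstype=swap, opts=sw, dump=0, passno=0, state=present) persists an /etc/fstab entry; inferred from the module parameters, it should look like:
    /swap none swap sw 0 0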
Oct 12 16:47:24 np0005481680 python3.9[32553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:47:25 np0005481680 python3.9[32705]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:47:26 np0005481680 python3.9[32828]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302044.9310033-639-69982549638596/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:47:29 np0005481680 python3.9[32981]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 12 16:47:30 np0005481680 python3.9[33134]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 16:47:32 np0005481680 python3.9[33292]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 12 16:47:32 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 16:47:32 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 16:47:33 np0005481680 python3.9[33453]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 12 16:47:33 np0005481680 python3.9[33606]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 16:47:34 np0005481680 python3.9[33764]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 12 16:47:35 np0005481680 python3.9[33916]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:47:38 np0005481680 python3.9[34069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:47:38 np0005481680 python3.9[34221]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:47:39 np0005481680 python3.9[34344]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760302058.4620545-924-122334597598217/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:47:40 np0005481680 python3.9[34496]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:47:40 np0005481680 systemd[1]: Starting Load Kernel Modules...
Oct 12 16:47:41 np0005481680 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 12 16:47:41 np0005481680 kernel: Bridge firewalling registered
Oct 12 16:47:41 np0005481680 systemd-modules-load[34500]: Inserted module 'br_netfilter'
Oct 12 16:47:41 np0005481680 systemd[1]: Finished Load Kernel Modules.
Oct 12 16:47:41 np0005481680 python3.9[34655]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:47:42 np0005481680 python3.9[34778]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760302061.3247766-993-34828683503190/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:47:43 np0005481680 python3.9[34930]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:47:46 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 16:47:46 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 16:47:46 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:47:46 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:47:46 np0005481680 systemd[1]: Reloading.
Oct 12 16:47:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:47:47 np0005481680 systemd[1]: Starting dnf makecache...
Oct 12 16:47:47 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:47:47 np0005481680 dnf[35004]: Failed determining last makecache time.
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-barbican-42b4c41831408a8e323 113 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 155 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-cinder-1c00d6490d88e436f26ef 145 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-stevedore-c4acc5639fd2329372142 151 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-observabilityclient-2f31846d73c 148 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-diskimage-builder-7d793e664cf892461c55 146 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 163 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-designate-tests-tempest-347fdbc 149 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-glance-1fd12c29b339f30fe823e 138 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 161 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-manila-3c01b7181572c95dac462 138 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-vmware-nsxlib-458234972d1428ac9 143 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-octavia-ba397f07a7331190208c 151 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-watcher-c014f81a8647287f6dcc 145 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d3 146 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-puppet-ceph-91ba84bc002c318a7f961d084e 163 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-swift-dc98a8463506ac520c469a 156 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-python-tempestconf-8515371b7cceebd4282 143 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: delorean-openstack-heat-ui-013accbfd179753bc3f0 107 kB/s | 3.0 kB     00:00
Oct 12 16:47:47 np0005481680 dnf[35004]: CentOS Stream 9 - BaseOS                         56 kB/s | 6.7 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: CentOS Stream 9 - AppStream                      62 kB/s | 6.8 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: CentOS Stream 9 - CRB                            43 kB/s | 6.6 kB     00:00
Oct 12 16:47:48 np0005481680 python3.9[36358]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:47:48 np0005481680 dnf[35004]: CentOS Stream 9 - Extras packages                66 kB/s | 8.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: dlrn-antelope-testing                            89 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: dlrn-antelope-build-deps                        100 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: centos9-rabbitmq                                 76 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: centos9-storage                                  86 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: centos9-opstools                                 23 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: NFV SIG OpenvSwitch                              68 kB/s | 3.0 kB     00:00
Oct 12 16:47:48 np0005481680 dnf[35004]: repo-setup-centos-appstream                      83 kB/s | 4.4 kB     00:00
Oct 12 16:47:49 np0005481680 dnf[35004]: repo-setup-centos-baseos                        136 kB/s | 3.9 kB     00:00
Oct 12 16:47:49 np0005481680 dnf[35004]: repo-setup-centos-highavailability              142 kB/s | 3.9 kB     00:00
Oct 12 16:47:49 np0005481680 dnf[35004]: repo-setup-centos-powertools                    176 kB/s | 4.3 kB     00:00
Oct 12 16:47:49 np0005481680 dnf[35004]: Extra Packages for Enterprise Linux 9 - x86_64  280 kB/s |  34 kB     00:00
Oct 12 16:47:49 np0005481680 python3.9[37311]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 12 16:47:49 np0005481680 dnf[35004]: Metadata cache created.
Oct 12 16:47:49 np0005481680 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 12 16:47:49 np0005481680 systemd[1]: Finished dnf makecache.
Oct 12 16:47:49 np0005481680 systemd[1]: dnf-makecache.service: Consumed 1.767s CPU time.
Oct 12 16:47:50 np0005481680 python3.9[38116]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:47:51 np0005481680 python3.9[38982]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:47:51 np0005481680 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 12 16:47:51 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:47:51 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:47:51 np0005481680 systemd[1]: man-db-cache-update.service: Consumed 5.450s CPU time.
Oct 12 16:47:51 np0005481680 systemd[1]: run-rf712bc84201948b8ab33827806f3e3ac.service: Deactivated successfully.
Oct 12 16:47:51 np0005481680 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 12 16:47:52 np0005481680 python3.9[39519]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:47:52 np0005481680 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 12 16:47:52 np0005481680 systemd[1]: tuned.service: Deactivated successfully.
Oct 12 16:47:52 np0005481680 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 12 16:47:52 np0005481680 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 12 16:47:52 np0005481680 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 12 16:47:53 np0005481680 python3.9[39680]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 12 16:47:57 np0005481680 python3.9[39832]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:47:57 np0005481680 systemd[1]: Reloading.
Oct 12 16:47:57 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:47:58 np0005481680 python3.9[40021]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:47:59 np0005481680 systemd[1]: Reloading.
Oct 12 16:47:59 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:48:00 np0005481680 python3.9[40210]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:48:00 np0005481680 python3.9[40363]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:48:00 np0005481680 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
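Note: taken together, the swap tasks in this session (dd at 16:47:19, the 0600 file mode at 16:47:21, mkswap and swapon at 16:48:00) amount to the standard swap-file procedure; as a consolidated sketch of what the log records:
    dd if=/dev/zero of=/swap count=1024 bs=1M   # 1 GiB swap file (task skipped if /swap already exists)
    chmod 600 /swap                             # mode from the ansible.builtin.file task
    mkswap /swap
    swapon /swap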
Oct 12 16:48:01 np0005481680 python3.9[40516]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:48:03 np0005481680 python3.9[40678]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
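Note: the echo above drives the kernel's KSM control file; writing 2 both stops merging and unmerges already-shared pages, which matches the earlier tasks stopping ksm.service and ksmtuned.service:
    # /sys/kernel/mm/ksm/run values:
    #   0 - stop ksmd, keep existing merged pages
    #   1 - run ksmd
    #   2 - stop ksmd and unmerge all merged pages
    echo 2 > /sys/kernel/mm/ksm/run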
Oct 12 16:48:04 np0005481680 python3.9[40831]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:48:04 np0005481680 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 12 16:48:04 np0005481680 systemd[1]: Stopped Apply Kernel Variables.
Oct 12 16:48:04 np0005481680 systemd[1]: Stopping Apply Kernel Variables...
Oct 12 16:48:04 np0005481680 systemd[1]: Starting Apply Kernel Variables...
Oct 12 16:48:04 np0005481680 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 12 16:48:04 np0005481680 systemd[1]: Finished Apply Kernel Variables.
Oct 12 16:48:05 np0005481680 systemd[1]: session-9.scope: Deactivated successfully.
Oct 12 16:48:05 np0005481680 systemd[1]: session-9.scope: Consumed 2min 7.950s CPU time.
Oct 12 16:48:05 np0005481680 systemd-logind[783]: Session 9 logged out. Waiting for processes to exit.
Oct 12 16:48:05 np0005481680 systemd-logind[783]: Removed session 9.
Oct 12 16:48:10 np0005481680 systemd-logind[783]: New session 10 of user zuul.
Oct 12 16:48:10 np0005481680 systemd[1]: Started Session 10 of User zuul.
Oct 12 16:48:11 np0005481680 python3.9[41014]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:48:12 np0005481680 python3.9[41170]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 12 16:48:13 np0005481680 python3.9[41323]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 16:48:14 np0005481680 python3.9[41481]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 12 16:48:15 np0005481680 python3.9[41641]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:48:16 np0005481680 python3.9[41725]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 12 16:48:19 np0005481680 python3.9[41888]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
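Note: the two dnf tasks above first pre-fetch and then install openvswitch; assuming stock dnf, the equivalent CLI sequence would be roughly:
    dnf install -y --downloadonly openvswitch   # the download_only=True task
    dnf install -y openvswitch                  # the state=present task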
Oct 12 16:48:30 np0005481680 kernel: SELinux:  Converting 2724 SID table entries...
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:48:30 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:48:30 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 12 16:48:30 np0005481680 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 12 16:48:31 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:48:31 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:48:31 np0005481680 systemd[1]: Reloading.
Oct 12 16:48:31 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:48:31 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:48:32 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:48:32 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:48:32 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:48:32 np0005481680 systemd[1]: run-rcd3bc71c706b499c95df31ea67aba56b.service: Deactivated successfully.
Oct 12 16:48:33 np0005481680 python3.9[42989]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 16:48:33 np0005481680 systemd[1]: Reloading.
Oct 12 16:48:33 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:48:33 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:48:34 np0005481680 systemd[1]: Starting Open vSwitch Database Unit...
Oct 12 16:48:34 np0005481680 chown[43031]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 12 16:48:34 np0005481680 ovs-ctl[43036]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 12 16:48:34 np0005481680 ovs-ctl[43036]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 12 16:48:34 np0005481680 ovs-ctl[43036]: Starting ovsdb-server [  OK  ]
Oct 12 16:48:34 np0005481680 ovs-vsctl[43085]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 12 16:48:34 np0005481680 ovs-vsctl[43104]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"4fd585ac-c8a3-45e9-b563-f151bc390e2e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 12 16:48:34 np0005481680 ovs-ctl[43036]: Configuring Open vSwitch system IDs [  OK  ]
Oct 12 16:48:34 np0005481680 ovs-ctl[43036]: Enabling remote OVSDB managers [  OK  ]
Oct 12 16:48:34 np0005481680 systemd[1]: Started Open vSwitch Database Unit.
Oct 12 16:48:34 np0005481680 ovs-vsctl[43110]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 12 16:48:34 np0005481680 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 12 16:48:34 np0005481680 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 12 16:48:34 np0005481680 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 12 16:48:34 np0005481680 kernel: openvswitch: Open vSwitch switching datapath
Oct 12 16:48:34 np0005481680 ovs-ctl[43155]: Inserting openvswitch module [  OK  ]
Oct 12 16:48:34 np0005481680 ovs-ctl[43124]: Starting ovs-vswitchd [  OK  ]
Oct 12 16:48:34 np0005481680 ovs-vsctl[43172]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 12 16:48:34 np0005481680 ovs-ctl[43124]: Enabling remote OVSDB managers [  OK  ]
Oct 12 16:48:34 np0005481680 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 12 16:48:34 np0005481680 systemd[1]: Starting Open vSwitch...
Oct 12 16:48:34 np0005481680 systemd[1]: Finished Open vSwitch.
Oct 12 16:48:35 np0005481680 python3.9[43324]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:48:36 np0005481680 python3.9[43476]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
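Note: community.general.sefcontext manages SELinux file-context mappings; the task above corresponds approximately to the following semanage sequence (a sketch, assuming the usual policycoreutils tooling; applying the context to existing files is a separate restorecon step):
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config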
Oct 12 16:48:37 np0005481680 kernel: SELinux:  Converting 2738 SID table entries...
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 16:48:37 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 16:48:39 np0005481680 python3.9[43631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:48:39 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 12 16:48:40 np0005481680 python3.9[43789]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:48:42 np0005481680 python3.9[43942]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:48:43 np0005481680 python3.9[44229]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 12 16:48:44 np0005481680 python3.9[44379]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:48:45 np0005481680 python3.9[44533]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:48:47 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:48:47 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:48:47 np0005481680 systemd[1]: Reloading.
Oct 12 16:48:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:48:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:48:47 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:48:47 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:48:47 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:48:47 np0005481680 systemd[1]: run-rf95495cafa2b4037945c83301d51d29f.service: Deactivated successfully.
Oct 12 16:48:48 np0005481680 python3.9[44850]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:48:48 np0005481680 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 12 16:48:48 np0005481680 systemd[1]: Stopped Network Manager Wait Online.
Oct 12 16:48:48 np0005481680 systemd[1]: Stopping Network Manager Wait Online...
Oct 12 16:48:48 np0005481680 systemd[1]: Stopping Network Manager...
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5512] caught SIGTERM, shutting down normally.
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5526] dhcp4 (eth0): canceled DHCP transaction
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5527] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5527] dhcp4 (eth0): state changed no lease
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5530] manager: NetworkManager state is now CONNECTED_SITE
Oct 12 16:48:48 np0005481680 NetworkManager[3937]: <info>  [1760302128.5586] exiting (success)
Oct 12 16:48:48 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:48:48 np0005481680 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 12 16:48:48 np0005481680 systemd[1]: Stopped Network Manager.
Oct 12 16:48:48 np0005481680 systemd[1]: NetworkManager.service: Consumed 10.888s CPU time, 4.1M memory peak, read 0B from disk, written 33.5K to disk.
Oct 12 16:48:48 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:48:48 np0005481680 systemd[1]: Starting Network Manager...
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6120] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3ec8e364-c708-4309-b486-3e5f1b91e84f)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6121] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6179] manager[0x5607e7c33090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 12 16:48:48 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 16:48:48 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6953] hostname: hostname: using hostnamed
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6955] hostname: static hostname changed from (none) to "compute-0"
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6958] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6962] manager[0x5607e7c33090]: rfkill: Wi-Fi hardware radio set enabled
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6962] manager[0x5607e7c33090]: rfkill: WWAN hardware radio set enabled
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6978] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6985] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6985] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6986] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6986] manager: Networking is enabled by state file
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6987] settings: Loaded settings plugin: keyfile (internal)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.6990] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7009] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7015] dhcp: init: Using DHCP client 'internal'
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7017] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7021] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7024] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7029] device (lo): Activation: starting connection 'lo' (1f9e4de9-da2c-46bc-932f-a03e961620a0)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7034] device (eth0): carrier: link connected
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7036] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7039] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7040] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7044] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7048] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7052] device (eth1): carrier: link connected
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7055] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7058] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8fefafd4-aabf-52b9-842a-497d64cc3f86) (indicated)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7058] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7062] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7067] device (eth1): Activation: starting connection 'ci-private-network' (8fefafd4-aabf-52b9-842a-497d64cc3f86)
Oct 12 16:48:48 np0005481680 systemd[1]: Started Network Manager.
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7081] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7090] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7092] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7093] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7095] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7097] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7099] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7101] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7103] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7109] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7112] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7118] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7129] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7137] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7138] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7143] device (lo): Activation: successful, device activated.
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7149] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7149] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7152] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7154] device (eth1): Activation: successful, device activated.
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7161] dhcp4 (eth0): state changed new lease, address=38.102.83.111
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7167] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 12 16:48:48 np0005481680 systemd[1]: Starting Network Manager Wait Online...
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7225] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7240] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7241] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7246] manager: NetworkManager state is now CONNECTED_SITE
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7249] device (eth0): Activation: successful, device activated.
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7254] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 12 16:48:48 np0005481680 NetworkManager[44859]: <info>  [1760302128.7255] manager: startup complete
Oct 12 16:48:48 np0005481680 systemd[1]: Finished Network Manager Wait Online.
Oct 12 16:48:49 np0005481680 python3.9[45076]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:48:54 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:48:54 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:48:54 np0005481680 systemd[1]: Reloading.
Oct 12 16:48:54 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:48:54 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:48:54 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 16:48:55 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:48:55 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:48:55 np0005481680 systemd[1]: run-r8ad34c5937b74cdeb36a2ca26d475285.service: Deactivated successfully.
Oct 12 16:48:56 np0005481680 python3.9[45541]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:48:57 np0005481680 python3.9[45693]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:48:58 np0005481680 python3.9[45847]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:48:58 np0005481680 python3.9[45999]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:48:58 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 16:48:59 np0005481680 python3.9[46151]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:00 np0005481680 python3.9[46303]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
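
The ini_file tasks above make three edits to NetworkManager's configuration: pin no-auto-default=* into the [main] section (so NM stops generating automatic "Wired connection" profiles for new NICs), and delete any dns=none and rc-manager=unmanaged keys from both NetworkManager.conf and the cloud-init drop-in (so NM resumes managing /etc/resolv.conf). A minimal configparser sketch of the same edit on one file (path from the log; illustration only, since configparser rewrites the file and drops comments, unlike ini_file):

    import configparser

    path = "/etc/NetworkManager/NetworkManager.conf"
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    cfg.set("main", "no-auto-default", "*")   # never auto-create profiles for new NICs
    cfg.remove_option("main", "dns")          # drop dns=none: let NM manage resolv.conf
    cfg.remove_option("main", "rc-manager")   # drop rc-manager=unmanaged likewise
    with open(path, "w") as f:
        cfg.write(f)
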
Oct 12 16:49:01 np0005481680 python3.9[46455]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:49:01 np0005481680 python3.9[46578]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302140.570901-647-186549707056698/.source _original_basename=.0j_iqtfz follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:02 np0005481680 python3.9[46730]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:03 np0005481680 python3.9[46882]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 12 16:49:04 np0005481680 python3.9[47034]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:07 np0005481680 python3.9[47461]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 12 16:49:08 np0005481680 ansible-async_wrapper.py[47636]: Invoked with j127764794339 300 /home/zuul/.ansible/tmp/ansible-tmp-1760302147.4787798-845-156107901344110/AnsiballZ_edpm_os_net_config.py _
Oct 12 16:49:08 np0005481680 ansible-async_wrapper.py[47639]: Starting module and watcher
Oct 12 16:49:08 np0005481680 ansible-async_wrapper.py[47639]: Start watching 47640 (300)
Oct 12 16:49:08 np0005481680 ansible-async_wrapper.py[47640]: Start module (47640)
Oct 12 16:49:08 np0005481680 ansible-async_wrapper.py[47636]: Return async_wrapper task started.
Oct 12 16:49:08 np0005481680 python3.9[47641]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
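
The edpm_os_net_config module is a wrapper around the os-net-config CLI, and the argument dump above maps onto it almost one-to-one; a rough equivalent of this invocation is sketched below (how use_nmstate=True is passed through to the CLI is an assumption here, but it selects the nmstate/NetworkManager provider, which is why everything that follows is NetworkManager checkpoint and OVS activity rather than ifcfg-file writes):

    import subprocess

    # With --detailed-exit-codes, rc 0 = nothing to do, rc 2 = configuration changed.
    result = subprocess.run([
        "os-net-config",
        "-c", "/etc/os-net-config/config.yaml",  # config_file from the module args
        "--cleanup",                 # remove interfaces not described in config.yaml
        "--debug",
        "--detailed-exit-codes",
    ])
    print(result.returncode)
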
Oct 12 16:49:09 np0005481680 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 12 16:49:09 np0005481680 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 12 16:49:09 np0005481680 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 12 16:49:09 np0005481680 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 12 16:49:09 np0005481680 kernel: cfg80211: failed to load regulatory.db
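
The two cfg80211 failures are benign on this Nova guest: error -2 is -ENOENT, meaning /lib/firmware/regulatory.db simply is not installed (no wireless-regdb package), and a VM with no Wi-Fi hardware never needs the regulatory database. A quick check, for illustration:

    import errno, os.path

    print(errno.errorcode[2])                             # 'ENOENT': "error -2" above
    print(os.path.exists("/lib/firmware/regulatory.db"))  # expect False on this image
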
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.2981] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3002] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47642 uid=0 result="success"
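
The checkpoint created just above brackets the whole network reconfiguration: if the tool crashed or cut its own connectivity before confirming the result, NetworkManager would roll every device back to this snapshot when the rollback timeout expired. The recurring checkpoint-adjust-rollback-timeout entries are keep-alives, and the eventual checkpoint-destroy is the commit. A minimal sketch of the same pattern over D-Bus (timeout value assumed; requires dbus-python):

    import dbus

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager",
                       "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager",
    )

    # Empty device list = snapshot all devices; roll back automatically after 60 s.
    cp = nm.CheckpointCreate(dbus.Array([], signature="o"),
                             dbus.UInt32(60), dbus.UInt32(0))
    try:
        # ... apply network changes here ...
        nm.CheckpointAdjustRollbackTimeout(cp, dbus.UInt32(60))  # keep-alive
        nm.CheckpointDestroy(cp)    # commit: cancel the pending rollback
    except Exception:
        nm.CheckpointRollback(cp)   # undo everything back to the snapshot
        raise
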
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3514] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3519] audit: op="connection-add" uuid="95eaffd1-6481-43cc-adf7-6c706c97a482" name="br-ex-br" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3533] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3535] audit: op="connection-add" uuid="63ee962d-f41d-48a1-9864-5a8164f438a6" name="br-ex-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3545] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3546] audit: op="connection-add" uuid="67f672d1-a416-41cc-a8eb-57268350b4a4" name="eth1-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3556] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3557] audit: op="connection-add" uuid="40576c34-585c-47eb-a157-1d887bcee8b1" name="vlan20-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3567] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3568] audit: op="connection-add" uuid="b318d910-b623-42e7-9e14-981f14ac1271" name="vlan21-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3577] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3579] audit: op="connection-add" uuid="10afe8b6-ccdd-4d5b-b373-feb67e3c8f15" name="vlan22-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3588] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3589] audit: op="connection-add" uuid="ce4bfba2-0bbd-45a7-813c-f44cc82a2cd8" name="vlan23-port" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3607] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3620] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3621] audit: op="connection-add" uuid="de80f591-e75f-4bf9-acb9-9e954dfb5155" name="br-ex-if" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3651] audit: op="connection-update" uuid="8fefafd4-aabf-52b9-842a-497d64cc3f86" name="ci-private-network" args="connection.timestamp,connection.port-type,connection.master,connection.controller,connection.slave-type,ipv4.addresses,ipv4.routing-rules,ipv4.routes,ipv4.dns,ipv4.method,ipv4.never-default,ipv6.addresses,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.routes,ovs-external-ids.data,ovs-interface.type" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3665] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3666] audit: op="connection-add" uuid="3a670a4f-1817-4a69-90b3-b024ec183914" name="vlan20-if" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3679] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3681] audit: op="connection-add" uuid="7e21ec6a-43a4-423f-91c8-d31872f85222" name="vlan21-if" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3694] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3696] audit: op="connection-add" uuid="d0ac4ce9-d21b-403d-b7b4-474f64bdce70" name="vlan22-if" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3708] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3710] audit: op="connection-add" uuid="296e7491-1444-4888-922c-6bec5ae6de06" name="vlan23-if" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3719] audit: op="connection-delete" uuid="38704a7a-1cf9-3e21-9c3b-42fb6d65a758" name="Wired connection 1" pid=47642 uid=0 result="success"
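
The connection-add batch above builds NetworkManager's three-level model of an OVS bridge: one ovs-bridge profile (br-ex-br), one ovs-port profile per attachment (br-ex-port, eth1-port, vlan20-port ... vlan23-port), and an ovs-interface profile inside each port (br-ex-if, vlan20-if ... vlan23-if, plus the updated ci-private-network profile enslaving eth1). That is why the same name appears below as separate Bridge, Port and Interface devices, and the deleted "Wired connection 1" is presumably the old auto-generated profile for eth1 that the new topology replaces. One such triple via nmcli looks roughly like this (names mirror the log; addressing options omitted):

    import subprocess

    def nmcli_add(*args):
        subprocess.run(["nmcli", "connection", "add", *args], check=True)

    # bridge -> port -> interface, as NetworkManager models Open vSwitch
    nmcli_add("type", "ovs-bridge", "conn.interface", "br-ex",
              "con-name", "br-ex-br")
    nmcli_add("type", "ovs-port", "conn.interface", "br-ex",
              "master", "br-ex-br", "con-name", "br-ex-port")
    nmcli_add("type", "ovs-interface", "slave-type", "ovs-port",
              "conn.interface", "br-ex", "master", "br-ex-port",
              "con-name", "br-ex-if",
              "ipv4.method", "disabled", "ipv6.method", "disabled")
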
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3731] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3739] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3743] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (95eaffd1-6481-43cc-adf7-6c706c97a482)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3743] audit: op="connection-activate" uuid="95eaffd1-6481-43cc-adf7-6c706c97a482" name="br-ex-br" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3745] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3751] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3754] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (63ee962d-f41d-48a1-9864-5a8164f438a6)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3756] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3761] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3764] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (67f672d1-a416-41cc-a8eb-57268350b4a4)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3766] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3772] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3776] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (40576c34-585c-47eb-a157-1d887bcee8b1)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3777] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3783] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3786] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b318d910-b623-42e7-9e14-981f14ac1271)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3788] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3794] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3797] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (10afe8b6-ccdd-4d5b-b373-feb67e3c8f15)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3799] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3804] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3808] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (ce4bfba2-0bbd-45a7-813c-f44cc82a2cd8)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3809] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3811] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3813] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3817] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3821] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3825] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (de80f591-e75f-4bf9-acb9-9e954dfb5155)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3826] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3829] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3831] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3832] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3833] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3842] device (eth1): disconnecting for new activation request.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3843] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3845] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3847] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3848] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3851] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3855] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3859] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3a670a4f-1817-4a69-90b3-b024ec183914)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3860] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3862] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3864] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3865] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3868] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3872] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3876] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (7e21ec6a-43a4-423f-91c8-d31872f85222)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3877] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3880] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3882] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3883] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3885] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3890] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3893] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d0ac4ce9-d21b-403d-b7b4-474f64bdce70)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3894] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3897] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3898] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3900] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3902] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3906] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3911] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (296e7491-1444-4888-922c-6bec5ae6de06)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3911] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3914] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3916] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3917] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3918] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3928] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3930] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3933] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3935] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3940] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3943] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3946] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3950] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3951] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3956] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3958] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 kernel: ovs-system: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3961] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3962] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3967] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3972] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3974] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 systemd-udevd[47646]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:49:10 np0005481680 kernel: Timeout policy base is empty
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3976] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3979] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3983] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3985] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3986] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3991] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3995] dhcp4 (eth0): canceled DHCP transaction
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3995] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3995] dhcp4 (eth0): state changed no lease
Oct 12 16:49:10 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.3997] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4011] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4014] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47642 uid=0 result="fail" reason="Device is not activated"
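
This failure contrasts with the successful device-reapply on eth0 just above: a profile can only be re-applied in place while its device is in the activated state, and eth1 is mid-way through being torn down and re-enslaved to its OVS port, so the tool falls back to a full connection-activate of 'ci-private-network' two lines below. The in-place path it attempted is, roughly:

    import subprocess

    # Push the edited profile to the running device without deactivating it;
    # nmcli exits non-zero (as logged for eth1) unless the device is activated.
    subprocess.run(["nmcli", "device", "reapply", "eth0"], check=True)
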
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4020] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4026] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4033] device (eth1): disconnecting for new activation request.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4034] audit: op="connection-activate" uuid="8fefafd4-aabf-52b9-842a-497d64cc3f86" name="ci-private-network" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4035] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4068] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4070] dhcp4 (eth0): state changed new lease, address=38.102.83.111
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4111] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47642 uid=0 result="success"
Oct 12 16:49:10 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4204] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4300] device (eth1): Activation: starting connection 'ci-private-network' (8fefafd4-aabf-52b9-842a-497d64cc3f86)
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4304] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4325] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4331] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4338] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 kernel: br-ex: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4344] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4348] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4349] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4350] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4352] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4353] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4355] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4373] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4380] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4384] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4389] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4396] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4401] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4406] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 kernel: vlan22: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4411] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4418] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 systemd-udevd[47648]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4421] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4425] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4431] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4436] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4443] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 12 16:49:10 np0005481680 kernel: vlan20: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4453] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 systemd-udevd[47647]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4509] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4515] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 kernel: vlan23: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4523] device (eth1): Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 systemd-udevd[47746]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4545] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4553] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4585] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4611] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4618] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4620] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 kernel: vlan21: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4633] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4652] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4663] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4668] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4681] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4685] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4711] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4718] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4738] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4740] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4743] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4753] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4759] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4765] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4782] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4806] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4856] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4862] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 12 16:49:10 np0005481680 NetworkManager[44859]: <info>  [1760302150.4869] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 12 16:49:11 np0005481680 NetworkManager[44859]: <info>  [1760302151.5883] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47642 uid=0 result="success"
Oct 12 16:49:11 np0005481680 NetworkManager[44859]: <info>  [1760302151.7485] checkpoint[0x5607e7c08950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 12 16:49:11 np0005481680 NetworkManager[44859]: <info>  [1760302151.7486] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.0691] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.0699] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.2505] audit: op="networking-control" arg="global-dns-configuration" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.2531] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.2559] audit: op="networking-control" arg="global-dns-configuration" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.2991] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 python3.9[48002]: ansible-ansible.legacy.async_status Invoked with jid=j127764794339.47636 mode=status _async_dir=/root/.ansible_async
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.4843] checkpoint[0x5607e7c08a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 12 16:49:12 np0005481680 NetworkManager[44859]: <info>  [1760302152.4847] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47642 uid=0 result="success"
Oct 12 16:49:12 np0005481680 ansible-async_wrapper.py[47640]: Module complete (47640)
Oct 12 16:49:13 np0005481680 ansible-async_wrapper.py[47639]: Done in kid B.
Oct 12 16:49:15 np0005481680 python3.9[48106]: ansible-ansible.legacy.async_status Invoked with jid=j127764794339.47636 mode=status _async_dir=/root/.ansible_async
Oct 12 16:49:16 np0005481680 python3.9[48206]: ansible-ansible.legacy.async_status Invoked with jid=j127764794339.47636 mode=cleanup _async_dir=/root/.ansible_async
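
These async_status probes close out the async job opened by ansible-async_wrapper.py earlier: the wrapper forked the edpm_os_net_config module, recorded its result as JSON under the job id in /root/.ansible_async, and the controller polled that file until it reported finished, then requested cleanup. The poll loop amounts to roughly the following (path assembled from the job id in the log):

    import json, pathlib, time

    status = pathlib.Path("/root/.ansible_async/j127764794339.47636")
    while True:
        data = json.loads(status.read_text())
        if data.get("finished"):     # set by async_wrapper when the module exits
            break
        time.sleep(3)                # the log shows a few seconds between probes
    print(data.get("rc"), data.get("changed"))
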
Oct 12 16:49:17 np0005481680 python3.9[48358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:49:17 np0005481680 python3.9[48481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302156.7206635-926-119747221471080/.source.returncode _original_basename=.n1t1_uxq follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
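
The file written here records the os-net-config exit status for the idempotency check seen earlier (the stat of the same path before the run). Its logged checksum, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single character "0", so the recorded return code was 0. Quick verification:

    import hashlib

    assert (hashlib.sha1(b"0").hexdigest()
            == "b6589fc6ab0dc82cf12099d1c2d40ab994e8410c")
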
Oct 12 16:49:18 np0005481680 python3.9[48633]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:49:18 np0005481680 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 12 16:49:19 np0005481680 python3.9[48760]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302158.2268744-974-20003313037166/.source.cfg _original_basename=.khj9m5bk follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:20 np0005481680 python3.9[48912]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:49:20 np0005481680 systemd[1]: Reloading Network Manager...
Oct 12 16:49:20 np0005481680 NetworkManager[44859]: <info>  [1760302160.5913] audit: op="reload" arg="0" pid=48916 uid=0 result="success"
Oct 12 16:49:20 np0005481680 NetworkManager[44859]: <info>  [1760302160.5924] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 12 16:49:20 np0005481680 systemd[1]: Reloaded Network Manager.
Oct 12 16:49:21 np0005481680 systemd[1]: session-10.scope: Deactivated successfully.
Oct 12 16:49:21 np0005481680 systemd[1]: session-10.scope: Consumed 47.228s CPU time.
Oct 12 16:49:21 np0005481680 systemd-logind[783]: Session 10 logged out. Waiting for processes to exit.
Oct 12 16:49:21 np0005481680 systemd-logind[783]: Removed session 10.
Oct 12 16:49:26 np0005481680 systemd-logind[783]: New session 11 of user zuul.
Oct 12 16:49:26 np0005481680 systemd[1]: Started Session 11 of User zuul.
Oct 12 16:49:27 np0005481680 python3.9[49100]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:49:28 np0005481680 python3.9[49254]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:49:29 np0005481680 python3.9[49448]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:49:30 np0005481680 systemd[1]: session-11.scope: Deactivated successfully.
Oct 12 16:49:30 np0005481680 systemd[1]: session-11.scope: Consumed 2.183s CPU time.
Oct 12 16:49:30 np0005481680 systemd-logind[783]: Session 11 logged out. Waiting for processes to exit.
Oct 12 16:49:30 np0005481680 systemd-logind[783]: Removed session 11.
Oct 12 16:49:30 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 16:49:36 np0005481680 systemd-logind[783]: New session 12 of user zuul.
Oct 12 16:49:36 np0005481680 systemd[1]: Started Session 12 of User zuul.
Oct 12 16:49:37 np0005481680 python3.9[49630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:49:38 np0005481680 python3.9[49784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:49:39 np0005481680 python3.9[49941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:49:40 np0005481680 python3.9[50025]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:49:42 np0005481680 python3.9[50179]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:49:43 np0005481680 python3.9[50374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:49:44 np0005481680 python3.9[50526]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:49:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-compat3750711894-merged.mount: Deactivated successfully.
Oct 12 16:49:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3315138155-merged.mount: Deactivated successfully.
Oct 12 16:49:44 np0005481680 podman[50527]: 2025-10-12 20:49:44.311733889 +0000 UTC m=+0.059252670 system refresh
Oct 12 16:49:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:49:45 np0005481680 python3.9[50689]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:49:46 np0005481680 python3.9[50812]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302184.6885886-197-236687159744400/.source.json follow=False _original_basename=podman_network_config.j2 checksum=5b0af44e4cc4d41a3b7217129302ea0e2e4d0f66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
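
This drops a network definition into /etc/containers/networks/, the file-based store that podman's netavark backend reads (the earlier "system refresh" event was podman initializing its storage on first use). The actual contents are not logged, but a default-network definition of this kind is plain JSON shaped roughly like the sketch below; every value here is an assumed illustration, not the file that was deployed:

    import json

    network = {
        "name": "podman",
        "driver": "bridge",
        "network_interface": "podman0",
        "subnets": [{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}],
        "ipv6_enabled": False,
        "internal": False,
        "dns_enabled": False,
    }
    print(json.dumps(network, indent=2))
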
Oct 12 16:49:46 np0005481680 python3.9[50964]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:49:47 np0005481680 python3.9[51087]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760302186.3086338-242-177459875281849/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:49:48 np0005481680 python3.9[51239]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:49:49 np0005481680 python3.9[51391]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:49:49 np0005481680 python3.9[51543]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:49:50 np0005481680 python3.9[51695]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
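
The four ini_file tasks above pin podman's runtime behaviour in /etc/containers/containers.conf (TOML, which these simple key = value edits remain compatible with): a per-container pids limit, container events to journald, crun as the OCI runtime, and netavark as the network backend. The resulting sections should read roughly:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
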
Oct 12 16:49:51 np0005481680 python3.9[51847]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:49:54 np0005481680 python3.9[52000]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:49:54 np0005481680 python3.9[52154]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:49:55 np0005481680 python3.9[52306]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:49:56 np0005481680 python3.9[52458]: ansible-service_facts Invoked
Oct 12 16:49:56 np0005481680 network[52475]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 16:49:56 np0005481680 network[52476]: 'network-scripts' will be removed from distribution in near future.
Oct 12 16:49:56 np0005481680 network[52477]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 16:50:04 np0005481680 python3.9[52932]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:50:07 np0005481680 python3.9[53085]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 12 16:50:09 np0005481680 python3.9[53237]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:09 np0005481680 python3.9[53362]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302208.5326316-638-4655704109868/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:10 np0005481680 python3.9[53516]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:11 np0005481680 python3.9[53641]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302210.0949285-683-173166865160715/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:12 np0005481680 python3.9[53795]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
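[editor's note] The lineinfile task above enforces a single PEERNTP= entry in /etc/sysconfig/network; on EL systems PEERNTP=no stops DHCP-supplied NTP servers from being handed to chronyd, so only the servers templated into /etc/chrony.conf are used. The resulting line is simply:

    PEERNTP=no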
Oct 12 16:50:14 np0005481680 python3.9[53949]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:50:15 np0005481680 python3.9[54033]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:50:18 np0005481680 python3.9[54187]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:50:19 np0005481680 python3.9[54271]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:50:19 np0005481680 chronyd[797]: chronyd exiting
Oct 12 16:50:19 np0005481680 systemd[1]: Stopping NTP client/server...
Oct 12 16:50:19 np0005481680 systemd[1]: chronyd.service: Deactivated successfully.
Oct 12 16:50:19 np0005481680 systemd[1]: Stopped NTP client/server.
Oct 12 16:50:19 np0005481680 systemd[1]: Starting NTP client/server...
Oct 12 16:50:19 np0005481680 chronyd[54279]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 12 16:50:19 np0005481680 chronyd[54279]: Frequency -25.098 +/- 0.198 ppm read from /var/lib/chrony/drift
Oct 12 16:50:19 np0005481680 chronyd[54279]: Loaded seccomp filter (level 2)
Oct 12 16:50:19 np0005481680 systemd[1]: Started NTP client/server.
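[editor's note] The two systemd module calls above (16:50:15 enabled=True state=started, 16:50:19 state=restarted) are roughly equivalent to the following manual commands; a sketch, since the modules also perform their own idempotence checks:

    systemctl enable --now chronyd   # enabled=True state=started
    systemctl restart chronyd        # state=restarted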
Oct 12 16:50:19 np0005481680 systemd[1]: session-12.scope: Deactivated successfully.
Oct 12 16:50:19 np0005481680 systemd[1]: session-12.scope: Consumed 25.171s CPU time.
Oct 12 16:50:19 np0005481680 systemd-logind[783]: Session 12 logged out. Waiting for processes to exit.
Oct 12 16:50:19 np0005481680 systemd-logind[783]: Removed session 12.
Oct 12 16:50:24 np0005481680 systemd-logind[783]: New session 13 of user zuul.
Oct 12 16:50:24 np0005481680 systemd[1]: Started Session 13 of User zuul.
Oct 12 16:50:25 np0005481680 python3.9[54460]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:26 np0005481680 python3.9[54612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:27 np0005481680 python3.9[54735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302225.929815-62-80606968685471/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:27 np0005481680 systemd[1]: session-13.scope: Deactivated successfully.
Oct 12 16:50:27 np0005481680 systemd[1]: session-13.scope: Consumed 1.677s CPU time.
Oct 12 16:50:27 np0005481680 systemd-logind[783]: Session 13 logged out. Waiting for processes to exit.
Oct 12 16:50:27 np0005481680 systemd-logind[783]: Removed session 13.
Oct 12 16:50:32 np0005481680 systemd-logind[783]: New session 14 of user zuul.
Oct 12 16:50:32 np0005481680 systemd[1]: Started Session 14 of User zuul.
Oct 12 16:50:33 np0005481680 python3.9[54914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:50:35 np0005481680 python3.9[55070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:36 np0005481680 python3.9[55245]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:36 np0005481680 python3.9[55368]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760302235.3500583-83-79761062540847/.source.json _original_basename=.84488_mj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:37 np0005481680 python3.9[55520]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:38 np0005481680 python3.9[55643]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302237.3715558-152-255841799044964/.source _original_basename=.vtpu0o49 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:39 np0005481680 python3.9[55795]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:50:40 np0005481680 python3.9[55947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:40 np0005481680 python3.9[56070]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760302239.6096065-224-96870213213632/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:50:41 np0005481680 python3.9[56222]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:41 np0005481680 python3.9[56345]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760302240.8082368-224-153266669639126/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:50:42 np0005481680 python3.9[56497]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
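[editor's note] The mode=420 above (and again at 16:51:54) is not a typo: an unquoted 0644 in playbook YAML is parsed as octal and reaches the module as the decimal integer 420, which chmod applies as the same rw-r--r-- bit pattern:

    $ python3 -c 'print(oct(420))'
    0o644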
Oct 12 16:50:43 np0005481680 python3.9[56649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:43 np0005481680 python3.9[56772]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302242.8292127-335-88305607114311/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:44 np0005481680 python3.9[56924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:45 np0005481680 python3.9[57047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302244.2895272-380-84820771367080/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:46 np0005481680 python3.9[57199]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:50:46 np0005481680 systemd[1]: Reloading.
Oct 12 16:50:46 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:50:46 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:50:46 np0005481680 systemd[1]: Reloading.
Oct 12 16:50:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:50:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:50:47 np0005481680 systemd[1]: Starting EDPM Container Shutdown...
Oct 12 16:50:47 np0005481680 systemd[1]: Finished EDPM Container Shutdown.
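[editor's note] The preset file installed at /etc/systemd/system-preset/91-edpm-container-shutdown.preset is not shown in the log; given the systemd.preset(5) format and the enablement that follows, it plausibly contains a single directive along these lines (an assumption, not confirmed by the log; the 91-netns-placeholder.preset written below would be analogous):

    # assumed content of 91-edpm-container-shutdown.preset
    enable edpm-container-shutdown.service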
Oct 12 16:50:48 np0005481680 python3.9[57427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:48 np0005481680 python3.9[57550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302247.5730093-449-18892476661569/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:49 np0005481680 python3.9[57702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:50:50 np0005481680 python3.9[57825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302249.1052997-494-41229370383900/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:50:51 np0005481680 python3.9[57977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:50:51 np0005481680 systemd[1]: Reloading.
Oct 12 16:50:51 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:50:51 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:50:51 np0005481680 systemd[1]: Reloading.
Oct 12 16:50:51 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:50:51 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:50:51 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 16:50:51 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 16:50:51 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 16:50:51 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 16:50:52 np0005481680 python3.9[58203]: ansible-ansible.builtin.service_facts Invoked
Oct 12 16:50:52 np0005481680 network[58220]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 16:50:52 np0005481680 network[58221]: 'network-scripts' will be removed from distribution in near future.
Oct 12 16:50:52 np0005481680 network[58222]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 16:50:56 np0005481680 python3.9[58486]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:50:58 np0005481680 systemd[1]: Reloading.
Oct 12 16:50:58 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:50:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:50:58 np0005481680 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 12 16:50:58 np0005481680 iptables.init[58526]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 12 16:50:58 np0005481680 iptables.init[58526]: iptables: Flushing firewall rules: [  OK  ]
Oct 12 16:50:58 np0005481680 systemd[1]: iptables.service: Deactivated successfully.
Oct 12 16:50:58 np0005481680 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 12 16:50:59 np0005481680 python3.9[58722]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:51:00 np0005481680 python3.9[58876]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:51:00 np0005481680 systemd[1]: Reloading.
Oct 12 16:51:00 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:51:00 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:51:00 np0005481680 systemd[1]: Starting Netfilter Tables...
Oct 12 16:51:00 np0005481680 systemd[1]: Finished Netfilter Tables.
Oct 12 16:51:02 np0005481680 python3.9[59068]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:51:05 np0005481680 python3.9[59221]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:05 np0005481680 python3.9[59346]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302264.5691755-701-189845328211500/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
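[editor's note] Because the copy task passed validate=/usr/sbin/sshd -T -f %s, Ansible test-parses the candidate file (with %s replaced by a temporary path) and only moves it over /etc/ssh/sshd_config if the check passes; the same check can be run by hand:

    /usr/sbin/sshd -T -f /etc/ssh/sshd_config   # exits non-zero on a bad config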
Oct 12 16:51:07 np0005481680 python3.9[59497]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:51:32 np0005481680 systemd[1]: session-14.scope: Deactivated successfully.
Oct 12 16:51:32 np0005481680 systemd[1]: session-14.scope: Consumed 20.265s CPU time.
Oct 12 16:51:32 np0005481680 systemd-logind[783]: Session 14 logged out. Waiting for processes to exit.
Oct 12 16:51:32 np0005481680 systemd-logind[783]: Removed session 14.
Oct 12 16:51:44 np0005481680 systemd-logind[783]: New session 15 of user zuul.
Oct 12 16:51:44 np0005481680 systemd[1]: Started Session 15 of User zuul.
Oct 12 16:51:45 np0005481680 python3.9[59690]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:51:47 np0005481680 python3.9[59846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:47 np0005481680 python3.9[60021]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:48 np0005481680 python3.9[60099]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ar_63gg9 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:49 np0005481680 python3.9[60251]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:50 np0005481680 python3.9[60329]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.1ca197n6 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:50 np0005481680 python3.9[60481]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:51:51 np0005481680 python3.9[60633]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:52 np0005481680 python3.9[60711]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:51:52 np0005481680 python3.9[60863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:53 np0005481680 python3.9[60941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:51:54 np0005481680 python3.9[61093]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:54 np0005481680 python3.9[61245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:55 np0005481680 python3.9[61323]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:56 np0005481680 python3.9[61475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:56 np0005481680 python3.9[61553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:51:58 np0005481680 python3.9[61705]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:51:58 np0005481680 systemd[1]: Reloading.
Oct 12 16:51:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:51:58 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:51:58 np0005481680 python3.9[61894]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:51:59 np0005481680 python3.9[61972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:00 np0005481680 python3.9[62124]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:00 np0005481680 python3.9[62202]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:01 np0005481680 python3.9[62354]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:52:01 np0005481680 systemd[1]: Reloading.
Oct 12 16:52:01 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:52:01 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:52:02 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 16:52:02 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 16:52:02 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 16:52:02 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 16:52:03 np0005481680 python3.9[62546]: ansible-ansible.builtin.service_facts Invoked
Oct 12 16:52:03 np0005481680 network[62563]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 16:52:03 np0005481680 network[62564]: 'network-scripts' will be removed from distribution in near future.
Oct 12 16:52:03 np0005481680 network[62565]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 16:52:08 np0005481680 python3.9[62828]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:09 np0005481680 python3.9[62906]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:10 np0005481680 python3.9[63058]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:11 np0005481680 python3.9[63210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:12 np0005481680 python3.9[63333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302330.6936111-608-154697324575896/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:13 np0005481680 python3.9[63485]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 12 16:52:13 np0005481680 systemd[1]: Starting Time & Date Service...
Oct 12 16:52:13 np0005481680 systemd[1]: Started Time & Date Service.
Oct 12 16:52:14 np0005481680 python3.9[63641]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:15 np0005481680 python3.9[63793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:15 np0005481680 python3.9[63916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302334.5330486-713-43631347235731/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:16 np0005481680 python3.9[64068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:17 np0005481680 python3.9[64191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760302336.116611-758-256267638975997/.source.yaml _original_basename=.jc7u8gml follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:18 np0005481680 python3.9[64343]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:18 np0005481680 python3.9[64466]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302337.574389-803-224101438718352/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:19 np0005481680 python3.9[64618]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:52:20 np0005481680 python3.9[64771]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:52:21 np0005481680 python3[64924]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 12 16:52:22 np0005481680 python3.9[65076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:23 np0005481680 python3.9[65199]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302342.1277072-920-222037177337924/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:24 np0005481680 python3.9[65351]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:24 np0005481680 python3.9[65474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302343.6737578-965-218161741476801/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:25 np0005481680 python3.9[65626]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:26 np0005481680 python3.9[65749]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302345.3255212-1010-76236187867689/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:27 np0005481680 python3.9[65901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:28 np0005481680 python3.9[66024]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302346.8943655-1055-171491262996177/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:29 np0005481680 python3.9[66176]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 16:52:29 np0005481680 python3.9[66299]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760302348.4736862-1100-6842228610461/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:30 np0005481680 chronyd[54279]: Selected source 51.222.12.92 (pool.ntp.org)
Oct 12 16:52:30 np0005481680 python3.9[66451]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:31 np0005481680 python3.9[66603]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
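[editor's note] Decoded, the pipeline above concatenates the generated fragments in dependency order and asks nft for a check-only pass (-c), without touching the live ruleset; the chains file is later loaded on its own at 16:52:57, and the flushes/rules/update-jumps trio is applied for real at 16:52:59:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -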
Oct 12 16:52:32 np0005481680 python3.9[66762]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
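[editor's note] With the #012 escapes (journald's encoding of embedded newlines) expanded, the block written into /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK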
Oct 12 16:52:33 np0005481680 python3.9[66915]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:34 np0005481680 python3.9[67067]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:35 np0005481680 python3.9[67219]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 12 16:52:35 np0005481680 python3.9[67372]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
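[editor's note] With state=mounted and boot=True, each ansible.posix.mount call both mounts the filesystem and persists it; from the logged parameters the resulting /etc/fstab entries would look like this sketch:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0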
Oct 12 16:52:36 np0005481680 systemd[1]: session-15.scope: Deactivated successfully.
Oct 12 16:52:36 np0005481680 systemd[1]: session-15.scope: Consumed 36.466s CPU time.
Oct 12 16:52:36 np0005481680 systemd-logind[783]: Session 15 logged out. Waiting for processes to exit.
Oct 12 16:52:36 np0005481680 systemd-logind[783]: Removed session 15.
Oct 12 16:52:41 np0005481680 systemd-logind[783]: New session 16 of user zuul.
Oct 12 16:52:41 np0005481680 systemd[1]: Started Session 16 of User zuul.
Oct 12 16:52:42 np0005481680 python3.9[67553]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 12 16:52:43 np0005481680 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 12 16:52:43 np0005481680 python3.9[67705]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:52:44 np0005481680 python3.9[67859]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:52:45 np0005481680 python3.9[68011]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4NhxrmrE6kUiwxXL3Yi9FYiE8LSb9llx9a8aCxpQy0BN/z37HFsaveUs/S6/bMKbYlXctigpmXWiizLmxfgqfWehp6Ae0JBqKA6kmyPWdRMHWbWCWUgBxM15/FjaaUchPj9aRQ97rq/+SxsA65gf965h5bfZaLw9eiZRgOvTrF5uOZqtZeqhhLa6hSuz04Ge7tgfG3ZQ/2w5IghJOraXAnvcFjBaAd2BYOCFm8bVOJa/ktqAhTQjBr1UC+WQT9E5rrAPK2Y5FYF16ZsMZVfS4sOWWtb5WjjsN1CkN2aIQwBEkslq3Mxh0OL8MXo86lWGS4UTuYjJHeTrFdbcJACHFv4t2xyGZ1L6/vQR/Hs6IXt1bg/EHOIR6wZ/fnWKQSI+S2iBCecgtvPeRMJFClfTDB2qjXMbgf2WckK9GQc4YbR5g1F3yFwT73rS0GJXnfQutVqRwoutLLsV99mGde8K09i2ak5vt9f3fytvFMef1/8IF18PW0fHqck/R+M6qPs0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILEEtOxHqqLB9Xl7rlloGLVlf2DYc1jvhr2nh17CvdGv#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7NghpfjXuB3jQjR53XOAR6x9lzT8iIX/Wi1Ye+NTbUBQF+NRqUeXBfYtcFOWUtcq23Rnw/xb2wrN3GnbrB9hk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI6e1crx7qVqXhyhbhrFBWUGYOVGZUckPXjjZRcw6shnuNvN01Ag5FgAnDlOUf9AeDbMJJ7asetW38PfKwp3/wUqRxl6jCE2+lIg49G2BTQRMguppb34XFm9BAFcSc1iMhuztA8ACxAYwJ8vjbpMkNgSvJ+U80Mc/lP3PC6jJhms3AEnjV7lLZhIbI+drPqehvFl/aejMY7h+c+8NzUiayfxI/5FuGWvSQCwgfHsxSKBAO1tnopsJGNwhGbHmsPsnqgjjAQ0UooAowO7FedSCJCxrrtUUmiAyNxIOATVNFIfqW7ZXK7wunVDbA3GJS4c73Ti7FSvHVLBg5++l1EqNCtuKjyX2PMYhWt08uObIvKBPSQWGI8aQtipxRnNKLG3ZFXQqT0dS5Mv64Y1OHRdncngiRX/UuWH4HXkWBFxPGcZPhNlMI7d/g7SORIO+Ol/V3Oy0XLP2vNKNb92QKnop/OoXlXQhjYfczYlHyVrmfzyuMoqs/6Cy2PJpm9hp2AW0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKO3ATPH4ob2WViy+ekA59ZjCoRjtCwXOpQhowimFdK8#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJx1ArXrcQNJ8gj4djaajKtJoo4uOxqSz3y53KT0rL9ZZxu+bEQTTKo7s2CbDRC1r+reQ2lNcQ3me495Hz/iRwY=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrMycXYVcL+7zn1LfDS/XAo4B1Q6k5q88AOQO0V1DSy0lEH/22bkAJHbcPijfrl3dhMeGJcwogjwt3PTaeqdOiJexXWbLlsxRvSVJMBvNJX3d2P72MUbflbh5Up3C18L/utF0UCYl6dSVtlMn8JKKaLAe4rlMOU72BTSoS8TVprRknp7VVeB6An8eZLeH0Vk3dXubE2zFgd0xTHQlinEHtdg+yc9M4YYfZ8EV8vU2z9Xsa0aORHhrZRAT8CIFo9CkIbUeF9U9UR5b4sTijzhP9C3f/jgf79E6nl5e9ZzxcuKmDQ8jiLVf9bRqRhbGR+2wueXEfdYVF58M+By6HungbQnlFlaAlAq1BZolYftt6FtG4PtJpO4RILyTPU5Wb+d0orXLr7Y0xldsuHX4yy7Q4d/PlsHUH/qrAga42txPkNPTQE4+HSwcEVkRiZA1fcJsF+FWjsZCEXgMPvo/sTLe/MaxGZuSQIEQPEoSYpCSgqtVRP32knPzV5IXlXE4WYk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGU20IB+SJC12pC7UZenWEz6ArNpBeKEDHazsNAGvY/c#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMx+m4KZ0FVqQWbD//2MFxyUmjEPegKQLve0bwFOx/bTj8jI1C2rNIhPSacPtNi0AR7NLdrRkvdxWrICVRa5jBk=#012 create=True mode=0644 path=/tmp/ansible.owg043jn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
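[editor's note] Expanded (again, #012 = newline) and with the key material truncated here for readability, the known-hosts block pushed to the temp file /tmp/ansible.owg043jn, which the next task copies over /etc/ssh/ssh_known_hosts, covers three compute hosts with rsa, ed25519, and ecdsa entries each:

    # BEGIN ANSIBLE MANAGED BLOCK
    compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3Nza... (truncated)
    compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3Nza... (truncated)
    compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2Vj... (truncated)
    (same three entries for compute-2 at 192.168.122.102 and compute-0 at 192.168.122.100)
    # END ANSIBLE MANAGED BLOCK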
Oct 12 16:52:46 np0005481680 python3.9[68163]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.owg043jn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:52:47 np0005481680 python3.9[68317]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.owg043jn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:52:47 np0005481680 systemd[1]: session-16.scope: Deactivated successfully.
Oct 12 16:52:47 np0005481680 systemd[1]: session-16.scope: Consumed 4.021s CPU time.
Oct 12 16:52:47 np0005481680 systemd-logind[783]: Session 16 logged out. Waiting for processes to exit.
Oct 12 16:52:47 np0005481680 systemd-logind[783]: Removed session 16.
Oct 12 16:52:53 np0005481680 systemd-logind[783]: New session 17 of user zuul.
Oct 12 16:52:53 np0005481680 systemd[1]: Started Session 17 of User zuul.
Oct 12 16:52:54 np0005481680 python3.9[68495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:52:55 np0005481680 python3.9[68651]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 12 16:52:56 np0005481680 python3.9[68805]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 16:52:57 np0005481680 python3.9[68958]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:52:58 np0005481680 python3.9[69112]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:52:59 np0005481680 python3.9[69266]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:53:00 np0005481680 python3.9[69421]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:00 np0005481680 systemd[1]: session-17.scope: Deactivated successfully.
Oct 12 16:53:00 np0005481680 systemd[1]: session-17.scope: Consumed 4.880s CPU time.
Oct 12 16:53:00 np0005481680 systemd-logind[783]: Session 17 logged out. Waiting for processes to exit.
Oct 12 16:53:00 np0005481680 systemd-logind[783]: Removed session 17.
Oct 12 16:53:06 np0005481680 systemd-logind[783]: New session 18 of user zuul.
Oct 12 16:53:06 np0005481680 systemd[1]: Started Session 18 of User zuul.
Oct 12 16:53:08 np0005481680 python3.9[69599]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:53:09 np0005481680 python3.9[69755]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:53:10 np0005481680 python3.9[69839]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 12 16:53:12 np0005481680 python3.9[69990]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:53:13 np0005481680 python3.9[70141]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 16:53:14 np0005481680 python3.9[70291]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:14 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 16:53:15 np0005481680 python3.9[70442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:15 np0005481680 systemd[1]: session-18.scope: Deactivated successfully.
Oct 12 16:53:15 np0005481680 systemd[1]: session-18.scope: Consumed 6.307s CPU time.
Oct 12 16:53:15 np0005481680 systemd-logind[783]: Session 18 logged out. Waiting for processes to exit.
Oct 12 16:53:15 np0005481680 systemd-logind[783]: Removed session 18.
Oct 12 16:53:23 np0005481680 systemd-logind[783]: New session 19 of user zuul.
Oct 12 16:53:23 np0005481680 systemd[1]: Started Session 19 of User zuul.
Oct 12 16:53:29 np0005481680 python3[71208]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:53:31 np0005481680 python3[71303]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 12 16:53:32 np0005481680 python3[71330]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:33 np0005481680 python3[71356]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:53:33 np0005481680 kernel: loop: module loaded
Oct 12 16:53:33 np0005481680 kernel: loop3: detected capacity change from 0 to 41943040
Oct 12 16:53:33 np0005481680 python3[71391]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:53:33 np0005481680 lvm[71394]: PV /dev/loop3 not used.
Oct 12 16:53:33 np0005481680 lvm[71396]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:53:33 np0005481680 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 12 16:53:33 np0005481680 lvm[71403]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct 12 16:53:33 np0005481680 lvm[71406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:53:33 np0005481680 lvm[71406]: VG ceph_vg0 finished
Oct 12 16:53:33 np0005481680 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
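The #012 sequences in the two command payloads above are journald's escaping of embedded newlines. Decoded, the tasks create a sparse 20 GiB backing file for a test OSD, attach it to /dev/loop3, and build an LVM stack with one logical volume spanning the whole device:

    # count=0 seek=20G extends the file to 20 GiB without writing data (sparse).
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

    # One PV, one VG, one LV taking all free extents.
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs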
Oct 12 16:53:34 np0005481680 python3[71484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:53:34 np0005481680 python3[71557]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302413.950853-33385-63041961207153/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:35 np0005481680 python3[71607]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 16:53:36 np0005481680 systemd[1]: Reloading.
Oct 12 16:53:36 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:53:36 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:53:36 np0005481680 systemd[1]: Starting Ceph OSD losetup...
Oct 12 16:53:36 np0005481680 bash[71647]: /dev/loop3: [64513]:4349701 (/var/lib/ceph-osd-0.img)
Oct 12 16:53:36 np0005481680 systemd[1]: Finished Ceph OSD losetup.
Oct 12 16:53:36 np0005481680 lvm[71650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:53:36 np0005481680 lvm[71650]: VG ceph_vg0 finished
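Loop devices do not survive a reboot, so the role installs ceph-osd-losetup-0.service (templated from ceph-osd-losetup.service.j2) to re-attach /dev/loop3 at boot; the losetup status line printed by bash[71647] shows the unit verifying the attachment. Only the template's checksum is logged, so the following is a hypothetical reconstruction, not the actual unit:

    # Hypothetical unit body -- the real ceph-osd-losetup.service.j2 is not shown in the log.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Attach if missing, otherwise print current status (matches the logged output).
    ExecStart=/usr/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || losetup /dev/loop3'
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service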
Oct 12 16:53:38 np0005481680 python3[71674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:53:41 np0005481680 python3[71767]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 12 16:53:43 np0005481680 python3[71824]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 12 16:53:47 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 16:53:47 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 16:53:47 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 16:53:47 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 16:53:47 np0005481680 systemd[1]: run-rc151626843544e328ab49cd8b2634986.service: Deactivated successfully.
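Ceph packages come from the CentOS Storage SIG: installing centos-release-ceph-squid lays down the repo definition, after which cephadm itself becomes installable (the man-db cache refresh above is just a side effect of the package transaction). The equivalent manual steps:

    # Enable the Storage SIG repo for Ceph Squid, then install the bootstrap tool.
    dnf -y install centos-release-ceph-squid
    dnf -y install cephadm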
Oct 12 16:53:47 np0005481680 python3[71943]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:48 np0005481680 python3[71971]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:53:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:53:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
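cephadm ls reports the daemons this host already runs as JSON (an empty list on a fresh node); the overlay mount deactivations above are likely podman/containers-storage cleaning up after that inspection. With the jq installed earlier, a quick filter might look like:

    # Empty output is expected before bootstrap; names appear once daemons are deployed.
    /usr/sbin/cephadm ls --no-detail | jq -r '.[].name'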
Oct 12 16:53:49 np0005481680 python3[72037]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:53:49 np0005481680 python3[72063]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:50 np0005481680 python3[72141]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:53:50 np0005481680 python3[72214]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302429.9825382-33577-132304427714612/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:51 np0005481680 python3[72316]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:53:51 np0005481680 python3[72389]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302431.1735852-33595-140539356452845/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:53:52 np0005481680 python3[72439]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:52 np0005481680 python3[72467]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:53 np0005481680 python3[72495]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:53:53 np0005481680 python3[72523]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
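Decoded from the #012 newline escape (and a stray line-continuation backslash before --skip-monitoring-stack), the bootstrap invocation is:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100

Everything that follows -- the ceph-admin session, the quay.io/ceph/ceph:v19 pull, and the run of short-lived containers -- is this one command at work.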
Oct 12 16:53:53 np0005481680 systemd[1]: Created slice User Slice of UID 42477.
Oct 12 16:53:53 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 12 16:53:53 np0005481680 systemd-logind[783]: New session 20 of user ceph-admin.
Oct 12 16:53:53 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 12 16:53:53 np0005481680 systemd[1]: Starting User Manager for UID 42477...
Oct 12 16:53:53 np0005481680 systemd[72531]: Queued start job for default target Main User Target.
Oct 12 16:53:53 np0005481680 systemd[72531]: Created slice User Application Slice.
Oct 12 16:53:53 np0005481680 systemd[72531]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:53:53 np0005481680 systemd[72531]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 16:53:53 np0005481680 systemd[72531]: Reached target Paths.
Oct 12 16:53:53 np0005481680 systemd[72531]: Reached target Timers.
Oct 12 16:53:53 np0005481680 systemd[72531]: Starting D-Bus User Message Bus Socket...
Oct 12 16:53:53 np0005481680 systemd[72531]: Starting Create User's Volatile Files and Directories...
Oct 12 16:53:53 np0005481680 systemd[72531]: Listening on D-Bus User Message Bus Socket.
Oct 12 16:53:53 np0005481680 systemd[72531]: Reached target Sockets.
Oct 12 16:53:53 np0005481680 systemd[72531]: Finished Create User's Volatile Files and Directories.
Oct 12 16:53:53 np0005481680 systemd[72531]: Reached target Basic System.
Oct 12 16:53:53 np0005481680 systemd[72531]: Reached target Main User Target.
Oct 12 16:53:53 np0005481680 systemd[72531]: Startup finished in 106ms.
Oct 12 16:53:53 np0005481680 systemd[1]: Started User Manager for UID 42477.
Oct 12 16:53:54 np0005481680 systemd[1]: Started Session 20 of User ceph-admin.
Oct 12 16:53:54 np0005481680 systemd[1]: session-20.scope: Deactivated successfully.
Oct 12 16:53:54 np0005481680 systemd-logind[783]: Session 20 logged out. Waiting for processes to exit.
Oct 12 16:53:54 np0005481680 systemd-logind[783]: Removed session 20.
Oct 12 16:53:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:53:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:53:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-compat2985558968-lower\x2dmapped.mount: Deactivated successfully.
Oct 12 16:54:04 np0005481680 systemd[1]: Stopping User Manager for UID 42477...
Oct 12 16:54:04 np0005481680 systemd[72531]: Activating special unit Exit the Session...
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped target Main User Target.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped target Basic System.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped target Paths.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped target Sockets.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped target Timers.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 12 16:54:04 np0005481680 systemd[72531]: Closed D-Bus User Message Bus Socket.
Oct 12 16:54:04 np0005481680 systemd[72531]: Stopped Create User's Volatile Files and Directories.
Oct 12 16:54:04 np0005481680 systemd[72531]: Removed slice User Application Slice.
Oct 12 16:54:04 np0005481680 systemd[72531]: Reached target Shutdown.
Oct 12 16:54:04 np0005481680 systemd[72531]: Finished Exit the Session.
Oct 12 16:54:04 np0005481680 systemd[72531]: Reached target Exit the Session.
Oct 12 16:54:04 np0005481680 systemd[1]: user@42477.service: Deactivated successfully.
Oct 12 16:54:04 np0005481680 systemd[1]: Stopped User Manager for UID 42477.
Oct 12 16:54:04 np0005481680 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 12 16:54:04 np0005481680 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 12 16:54:04 np0005481680 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 12 16:54:04 np0005481680 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 12 16:54:04 np0005481680 systemd[1]: Removed slice User Slice of UID 42477.
Oct 12 16:54:12 np0005481680 podman[72625]: 2025-10-12 20:54:12.892292746 +0000 UTC m=+18.574098706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:12 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:12 np0005481680 podman[72725]: 2025-10-12 20:54:12.957172273 +0000 UTC m=+0.045213390 container create 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:13 np0005481680 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 12 16:54:13 np0005481680 systemd[1]: Started libpod-conmon-9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3.scope.
Oct 12 16:54:13 np0005481680 podman[72725]: 2025-10-12 20:54:12.930143249 +0000 UTC m=+0.018184386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:13 np0005481680 podman[72725]: 2025-10-12 20:54:13.301339101 +0000 UTC m=+0.389380298 container init 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:13 np0005481680 podman[72725]: 2025-10-12 20:54:13.308711517 +0000 UTC m=+0.396752644 container start 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:13 np0005481680 podman[72725]: 2025-10-12 20:54:13.321306057 +0000 UTC m=+0.409347194 container attach 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 16:54:13 np0005481680 sleepy_tu[72741]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 12 16:54:13 np0005481680 systemd[1]: libpod-9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3.scope: Deactivated successfully.
Oct 12 16:54:13 np0005481680 podman[72725]: 2025-10-12 20:54:13.412684794 +0000 UTC m=+0.500725951 container died 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:54:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ddbda25a96b1a667a03bf2553fc2675cc4cf868807b62be2d89d23228801d745-merged.mount: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72725]: 2025-10-12 20:54:14.030575508 +0000 UTC m=+1.118616675 container remove 9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3 (image=quay.io/ceph/ceph:v19, name=sleepy_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 16:54:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-conmon-9eac59f9c46d2383276ab55ec90aa12990836a8218f9c25c19cf0ba22b030eb3.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.130191236 +0000 UTC m=+0.070411771 container create a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 16:54:14 np0005481680 systemd[1]: Started libpod-conmon-a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33.scope.
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.101219684 +0000 UTC m=+0.041440259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.211786904 +0000 UTC m=+0.152007419 container init a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.218886743 +0000 UTC m=+0.159107268 container start a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 16:54:14 np0005481680 admiring_perlman[72776]: 167 167
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.223427487 +0000 UTC m=+0.163648032 container attach a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.224836939 +0000 UTC m=+0.165057474 container died a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:54:14 np0005481680 podman[72760]: 2025-10-12 20:54:14.272105627 +0000 UTC m=+0.212326162 container remove a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33 (image=quay.io/ceph/ceph:v19, name=admiring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-conmon-a7c48cd22ba07f308fda9e9ad79d4e0eac045c6efdfc7c5c7e50e7a34ba22c33.scope: Deactivated successfully.
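Before deploying anything, cephadm probes the pulled image with throwaway containers: sleepy_tu prints the image's Ceph version (19.2.3 squid), and admiring_perlman's "167 167" is the uid/gid of the ceph account inside the image, which cephadm needs in order to chown host-side data directories. The exact invocations are not logged; they are plausibly equivalent to:

    # Approximate manual equivalents of the two probe containers.
    podman run --rm quay.io/ceph/ceph:v19 ceph --version
    podman run --rm quay.io/ceph/ceph:v19 stat -c '%u %g' /var/lib/ceph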
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.358401805 +0000 UTC m=+0.057499962 container create 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:14 np0005481680 systemd[1]: Started libpod-conmon-59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f.scope.
Oct 12 16:54:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.334805381 +0000 UTC m=+0.033903548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.434849952 +0000 UTC m=+0.133948109 container init 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.444945999 +0000 UTC m=+0.144044146 container start 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.448828333 +0000 UTC m=+0.147926460 container attach 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 dreamy_chaum[72809]: AQB2FexoPCTDHBAA7wu+SzWCeofghUhLL8ZMxA==
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.487633963 +0000 UTC m=+0.186732090 container died 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:14 np0005481680 podman[72793]: 2025-10-12 20:54:14.523020764 +0000 UTC m=+0.222118891 container remove 59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f (image=quay.io/ceph/ceph:v19, name=dreamy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-conmon-59647e10e18d9c3ca5c4b890e751082458c30f27a979a38cf8008320d6aa702f.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.605763837 +0000 UTC m=+0.044521020 container create 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 16:54:14 np0005481680 systemd[1]: Started libpod-conmon-7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294.scope.
Oct 12 16:54:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.585423658 +0000 UTC m=+0.024180891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.685934443 +0000 UTC m=+0.124691626 container init 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.697256196 +0000 UTC m=+0.136013429 container start 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.701399348 +0000 UTC m=+0.140156541 container attach 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 dreamy_villani[72842]: AQB2FexoP1LhKhAAnAgZFPH9SsjCyYkElWoDCw==
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.723289841 +0000 UTC m=+0.162047074 container died 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 16:54:14 np0005481680 podman[72828]: 2025-10-12 20:54:14.763355909 +0000 UTC m=+0.202113112 container remove 7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294 (image=quay.io/ceph/ceph:v19, name=dreamy_villani, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-conmon-7098405b4b6341f53664c7e7304a290f9203cdce996e5c24b1ff679acfa36294.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.850085018 +0000 UTC m=+0.056406149 container create 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 16:54:14 np0005481680 systemd[1]: Started libpod-conmon-8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29.scope.
Oct 12 16:54:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.904412825 +0000 UTC m=+0.110733966 container init 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.915080039 +0000 UTC m=+0.121401200 container start 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.919367445 +0000 UTC m=+0.125688576 container attach 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.832282876 +0000 UTC m=+0.038604047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:14 np0005481680 adoring_wu[72880]: AQB2FexooCaVNxAAajfmeP57gMriHZGeSGn2+w==
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29.scope: Deactivated successfully.
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.935419237 +0000 UTC m=+0.141740378 container died 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 podman[72864]: 2025-10-12 20:54:14.981772749 +0000 UTC m=+0.188093880 container remove 8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29 (image=quay.io/ceph/ceph:v19, name=adoring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:14 np0005481680 systemd[1]: libpod-conmon-8e3ba25e8f30b9f754be49f1b4712d5f73d93a7f704bc3a63115b037fc699e29.scope: Deactivated successfully.
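The next three containers (dreamy_chaum, dreamy_villani, adoring_wu) each emit a single base64 secret: freshly generated cephx keys for the new cluster (the mon, admin, and bootstrap identities). Ceph generates such keys with ceph-authtool, so each run plausibly amounts to:

    # Prints a new random cephx key, e.g. AQB2...==
    podman run --rm quay.io/ceph/ceph:v19 ceph-authtool --gen-print-key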
Oct 12 16:54:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ece00f1b672bc7add32e7376a48a584235b3df78dbbab1bea8c62e7be501c9fc-merged.mount: Deactivated successfully.
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.051588882 +0000 UTC m=+0.050030822 container create 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 16:54:15 np0005481680 systemd[1]: Started libpod-conmon-89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338.scope.
Oct 12 16:54:15 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd293578f216be133e26f7aee0313a0a6a3d8727f2933aea7695f1b14b6c274/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.02738213 +0000 UTC m=+0.025824120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.125923997 +0000 UTC m=+0.124365937 container init 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.132025357 +0000 UTC m=+0.130467277 container start 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.13520411 +0000 UTC m=+0.133646020 container attach 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:15 np0005481680 elated_matsumoto[72918]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 12 16:54:15 np0005481680 elated_matsumoto[72918]: setting min_mon_release = quincy
Oct 12 16:54:15 np0005481680 elated_matsumoto[72918]: /usr/bin/monmaptool: set fsid to 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:15 np0005481680 elated_matsumoto[72918]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 12 16:54:15 np0005481680 systemd[1]: libpod-89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338.scope: Deactivated successfully.
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.165311695 +0000 UTC m=+0.163753625 container died 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5bd293578f216be133e26f7aee0313a0a6a3d8727f2933aea7695f1b14b6c274-merged.mount: Deactivated successfully.
Oct 12 16:54:15 np0005481680 podman[72902]: 2025-10-12 20:54:15.203211509 +0000 UTC m=+0.201653419 container remove 89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338 (image=quay.io/ceph/ceph:v19, name=elated_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:54:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:15 np0005481680 systemd[1]: libpod-conmon-89d5dd14a92a7110cd02597a04d05f6e21a8fce186880cb483f3898382f3d338.scope: Deactivated successfully.
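elated_matsumoto writes the initial monitor map: epoch 0, the bootstrap fsid, one monitor, and min_mon_release pinned to quincy. The flags below are inferred from the tool's output and the paths mounted shortly after (the mon data dir ceph-compute-0 implies the monitor name compute-0), so this is an approximation:

    # Approximate monmap creation; the exact flags are not logged.
    monmaptool --create --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap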
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.278623236 +0000 UTC m=+0.050399683 container create 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 16:54:15 np0005481680 systemd[1]: Started libpod-conmon-122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99.scope.
Oct 12 16:54:15 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee4f67eb1203171728228adbc0cccb70065d2151e0118360d2b4346f5be319c/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee4f67eb1203171728228adbc0cccb70065d2151e0118360d2b4346f5be319c/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee4f67eb1203171728228adbc0cccb70065d2151e0118360d2b4346f5be319c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee4f67eb1203171728228adbc0cccb70065d2151e0118360d2b4346f5be319c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.255001361 +0000 UTC m=+0.026777848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.352776246 +0000 UTC m=+0.124552693 container init 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.359234256 +0000 UTC m=+0.131010703 container start 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.364901032 +0000 UTC m=+0.136677479 container attach 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.430643605 +0000 UTC m=+0.202420052 container died 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 16:54:15 np0005481680 systemd[1]: libpod-122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99.scope: Deactivated successfully.
Oct 12 16:54:15 np0005481680 podman[72937]: 2025-10-12 20:54:15.472079174 +0000 UTC m=+0.243855621 container remove 122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99 (image=quay.io/ceph/ceph:v19, name=jovial_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:15 np0005481680 systemd[1]: libpod-conmon-122ae60dc8d63787078687da60b55afe60b2cbed4315ba45cfb87c25c3261c99.scope: Deactivated successfully.
Oct 12 16:54:15 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:15 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:15 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:15 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:15 np0005481680 systemd[1]: Reached target All Ceph clusters and services.
Oct 12 16:54:15 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:16 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:16 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:16 np0005481680 systemd[1]: Reached target Ceph cluster 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:54:16 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:16 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:16 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:16 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:16 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:16 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:16 np0005481680 systemd[1]: Created slice Slice /system/ceph-5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:54:16 np0005481680 systemd[1]: Reached target System Time Set.
Oct 12 16:54:16 np0005481680 systemd[1]: Reached target System Time Synchronized.
Oct 12 16:54:16 np0005481680 systemd[1]: Starting Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:54:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:17 np0005481680 podman[73232]: 2025-10-12 20:54:17.149818563 +0000 UTC m=+0.087172503 container create b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 16:54:17 np0005481680 podman[73232]: 2025-10-12 20:54:17.103245853 +0000 UTC m=+0.040599783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54448c14a5c78887aeacdf3742d88f6bfc9e2af59fc534ec078f6c48e8becfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54448c14a5c78887aeacdf3742d88f6bfc9e2af59fc534ec078f6c48e8becfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54448c14a5c78887aeacdf3742d88f6bfc9e2af59fc534ec078f6c48e8becfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b54448c14a5c78887aeacdf3742d88f6bfc9e2af59fc534ec078f6c48e8becfe/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 podman[73232]: 2025-10-12 20:54:17.282907055 +0000 UTC m=+0.220260995 container init b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:54:17 np0005481680 podman[73232]: 2025-10-12 20:54:17.292416585 +0000 UTC m=+0.229770525 container start b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:17 np0005481680 bash[73232]: b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0
Oct 12 16:54:17 np0005481680 systemd[1]: Started Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: pidfile_write: ignore empty --pid-file
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: load: jerasure load: lrc 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: RocksDB version: 7.9.2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Git sha 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: DB SUMMARY
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: DB Session ID:  OG8N7CRZ28P5UK0ID9J6
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: CURRENT file:  CURRENT
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: IDENTITY file:  IDENTITY
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                         Options.error_if_exists: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.create_if_missing: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                         Options.paranoid_checks: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                                     Options.env: 0x55badb59ec20
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                                Options.info_log: 0x55badca34d60
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.max_file_opening_threads: 16
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                              Options.statistics: (nil)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                               Options.use_fsync: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.max_log_file_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                         Options.allow_fallocate: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.use_direct_reads: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.create_missing_column_families: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                              Options.db_log_dir: 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                                 Options.wal_dir: 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.advise_random_on_open: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                    Options.write_buffer_manager: 0x55badca39900
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                            Options.rate_limiter: (nil)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.unordered_write: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                               Options.row_cache: None
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                              Options.wal_filter: None
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.allow_ingest_behind: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.two_write_queues: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.manual_wal_flush: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.wal_compression: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.atomic_flush: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.log_readahead_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.allow_data_in_errors: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.db_host_id: __hostname__
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.max_background_jobs: 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.max_background_compactions: -1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.max_subcompactions: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.max_total_wal_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                          Options.max_open_files: -1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                          Options.bytes_per_sync: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:       Options.compaction_readahead_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.max_background_flushes: -1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Compression algorithms supported:
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kZSTD supported: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kXpressCompression supported: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kBZip2Compression supported: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kLZ4Compression supported: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kZlibCompression supported: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: 	kSnappyCompression supported: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:           Options.merge_operator: 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:        Options.compaction_filter: None
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55badca34500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55badca59350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:        Options.write_buffer_size: 33554432
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:  Options.max_write_buffer_number: 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.compression: NoCompression
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.num_levels: 7
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 695446f9-d869-48df-88e4-d00a44aa150b
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302457337512, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302457339259, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "OG8N7CRZ28P5UK0ID9J6", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302457339341, "job": 1, "event": "recovery_finished"}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55badca5ae00
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: DB pointer 0x55badcb64000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55badca59350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@-1(???) e0 preinit fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : last_changed 2025-10-12T20:54:15.161334+0000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : created 2025-10-12T20:54:15.161334+0000
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,os=Linux}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).mds e1 new map
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-10-12T20:54:17:378101+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : fsmap 
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mkfs 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.408910139 +0000 UTC m=+0.063608201 container create 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 16:54:17 np0005481680 systemd[1]: Started libpod-conmon-7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1.scope.
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.379960268 +0000 UTC m=+0.034658400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:17 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03153453a8479d0768bac6afa045e6b9395a31dea6343f81980cdbdc4b52221/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03153453a8479d0768bac6afa045e6b9395a31dea6343f81980cdbdc4b52221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03153453a8479d0768bac6afa045e6b9395a31dea6343f81980cdbdc4b52221/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.498606596 +0000 UTC m=+0.153304648 container init 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.505213661 +0000 UTC m=+0.159911723 container start 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.509181276 +0000 UTC m=+0.163879308 container attach 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 12 16:54:17 np0005481680 ceph-mon[73252]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314009176' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:  cluster:
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    id:     5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    health: HEALTH_OK
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]: 
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:  services:
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    mon: 1 daemons, quorum compute-0 (age 0.344177s)
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    mgr: no daemons active
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    osd: 0 osds: 0 up, 0 in
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]: 
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:  data:
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    pools:   0 pools, 0 pgs
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    objects: 0 objects, 0 B
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    usage:   0 B used, 0 B / 0 B avail
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]:    pgs:     
Oct 12 16:54:17 np0005481680 goofy_brahmagupta[73307]: 
Oct 12 16:54:17 np0005481680 systemd[1]: libpod-7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1.scope: Deactivated successfully.
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.737921011 +0000 UTC m=+0.392619073 container died 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 16:54:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f03153453a8479d0768bac6afa045e6b9395a31dea6343f81980cdbdc4b52221-merged.mount: Deactivated successfully.
Oct 12 16:54:17 np0005481680 podman[73253]: 2025-10-12 20:54:17.785697506 +0000 UTC m=+0.440395578 container remove 7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1 (image=quay.io/ceph/ceph:v19, name=goofy_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:54:17 np0005481680 systemd[1]: libpod-conmon-7279ee697dd473d5b1cdad004e776af38be2ad020218b599120059e4d154bcb1.scope: Deactivated successfully.
Oct 12 16:54:17 np0005481680 podman[73347]: 2025-10-12 20:54:17.840900048 +0000 UTC m=+0.034696140 container create 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 16:54:17 np0005481680 systemd[1]: Started libpod-conmon-73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798.scope.
Oct 12 16:54:17 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bd2d4d41c4af750a31c05640e4f17e00f09cb0a16532dff4aa0b6b644564a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bd2d4d41c4af750a31c05640e4f17e00f09cb0a16532dff4aa0b6b644564a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bd2d4d41c4af750a31c05640e4f17e00f09cb0a16532dff4aa0b6b644564a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bd2d4d41c4af750a31c05640e4f17e00f09cb0a16532dff4aa0b6b644564a8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:17 np0005481680 podman[73347]: 2025-10-12 20:54:17.912105581 +0000 UTC m=+0.105901693 container init 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:17 np0005481680 podman[73347]: 2025-10-12 20:54:17.920809687 +0000 UTC m=+0.114605789 container start 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 16:54:17 np0005481680 podman[73347]: 2025-10-12 20:54:17.825717191 +0000 UTC m=+0.019513313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:17 np0005481680 podman[73347]: 2025-10-12 20:54:17.924194607 +0000 UTC m=+0.117990719 container attach 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/51344957' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/51344957' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 12 16:54:18 np0005481680 nervous_zhukovsky[73363]: 
Oct 12 16:54:18 np0005481680 nervous_zhukovsky[73363]: [global]
Oct 12 16:54:18 np0005481680 nervous_zhukovsky[73363]:         fsid = 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:18 np0005481680 nervous_zhukovsky[73363]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
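
The audit entries above record `config assimilate-conf` dispatching and finishing; the helper container's stdout is the stub that must stay in the file, since fsid and mon_host identify the cluster rather than live in the monitor's configuration database. The next helper below runs `config generate-minimal-conf` to write that stub back out. The round-trip, sketched with assumed file paths:

    # Absorb a full ceph.conf into the mon config database, then emit the
    # minimal client config that remains on disk.
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal
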
Oct 12 16:54:18 np0005481680 systemd[1]: libpod-73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798.scope: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73347]: 2025-10-12 20:54:18.133811679 +0000 UTC m=+0.327607791 container died 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:54:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-95bd2d4d41c4af750a31c05640e4f17e00f09cb0a16532dff4aa0b6b644564a8-merged.mount: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73347]: 2025-10-12 20:54:18.186634971 +0000 UTC m=+0.380431103 container remove 73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798 (image=quay.io/ceph/ceph:v19, name=nervous_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 16:54:18 np0005481680 systemd[1]: libpod-conmon-73ab7664f3c32e23da94d2925132d4f41cdcf4ba398ca34a03c95f2cfdaab798.scope: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.262361637 +0000 UTC m=+0.048265229 container create 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:54:18 np0005481680 systemd[1]: Started libpod-conmon-314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c.scope.
Oct 12 16:54:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08399bff153dda64a58f650704f566cba01da939eb489c0a938af8af5a7b7c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08399bff153dda64a58f650704f566cba01da939eb489c0a938af8af5a7b7c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08399bff153dda64a58f650704f566cba01da939eb489c0a938af8af5a7b7c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08399bff153dda64a58f650704f566cba01da939eb489c0a938af8af5a7b7c6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.239344761 +0000 UTC m=+0.025248363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.340950558 +0000 UTC m=+0.126854170 container init 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.353470756 +0000 UTC m=+0.139374328 container start 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.357052051 +0000 UTC m=+0.142955633 container attach 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: from='client.? 192.168.122.100:0/51344957' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: from='client.? 192.168.122.100:0/51344957' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219285186' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:54:18 np0005481680 systemd[1]: libpod-314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c.scope: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.564251482 +0000 UTC m=+0.350155044 container died 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d08399bff153dda64a58f650704f566cba01da939eb489c0a938af8af5a7b7c6-merged.mount: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73401]: 2025-10-12 20:54:18.601735645 +0000 UTC m=+0.387639197 container remove 314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c (image=quay.io/ceph/ceph:v19, name=pensive_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:18 np0005481680 systemd[1]: libpod-conmon-314d2e5bb3a8e57a932b40a1db651a3fbf4455cb2e78a79f489d34e2d867b37c.scope: Deactivated successfully.
Oct 12 16:54:18 np0005481680 systemd[1]: Stopping Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: mon.compute-0@0(leader) e1 shutdown
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 12 16:54:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0[73248]: 2025-10-12T20:54:18.831+0000 7fe3f3969640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 12 16:54:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0[73248]: 2025-10-12T20:54:18.831+0000 7fe3f3969640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 12 16:54:18 np0005481680 ceph-mon[73252]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 12 16:54:18 np0005481680 podman[73486]: 2025-10-12 20:54:18.923942736 +0000 UTC m=+0.132080653 container died b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b54448c14a5c78887aeacdf3742d88f6bfc9e2af59fc534ec078f6c48e8becfe-merged.mount: Deactivated successfully.
Oct 12 16:54:18 np0005481680 podman[73486]: 2025-10-12 20:54:18.97134816 +0000 UTC m=+0.179486127 container remove b1757c3b0afe85ba6c6528e4c81b9ebfcac5dc3d4fbd4a8332496371542821f0 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:18 np0005481680 bash[73486]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0
Oct 12 16:54:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-0.service: Deactivated successfully.
Oct 12 16:54:19 np0005481680 systemd[1]: Stopped Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:54:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-0.service: Consumed 1.055s CPU time.
Oct 12 16:54:19 np0005481680 systemd[1]: Starting Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:54:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 12 16:54:19 np0005481680 podman[73589]: 2025-10-12 20:54:19.398917559 +0000 UTC m=+0.064819356 container create 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df248bb5d9274a503d423a8eb40c0a60ad69065f9e0dfded6a066e6240c4a7c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df248bb5d9274a503d423a8eb40c0a60ad69065f9e0dfded6a066e6240c4a7c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df248bb5d9274a503d423a8eb40c0a60ad69065f9e0dfded6a066e6240c4a7c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df248bb5d9274a503d423a8eb40c0a60ad69065f9e0dfded6a066e6240c4a7c2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 podman[73589]: 2025-10-12 20:54:19.365863447 +0000 UTC m=+0.031765264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:19 np0005481680 podman[73589]: 2025-10-12 20:54:19.473922874 +0000 UTC m=+0.139824731 container init 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:19 np0005481680 podman[73589]: 2025-10-12 20:54:19.487151812 +0000 UTC m=+0.153053609 container start 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 16:54:19 np0005481680 bash[73589]: 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521
Oct 12 16:54:19 np0005481680 systemd[1]: Started Ceph mon.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
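
This stop/start pair is a clean systemd-driven restart of the containerized monitor: the unit delivers SIGTERM via /run/podman-init, ceph-mon shuts down RocksDB, the old container (b1757c3b…) is removed, and a fresh one (88c795ee…) comes up under the same unit. The unit name embeds the cluster fsid, so the manual equivalent would be:

    systemctl restart ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-0.service
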
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: pidfile_write: ignore empty --pid-file
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: load: jerasure load: lrc 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: RocksDB version: 7.9.2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Git sha 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: DB SUMMARY
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: DB Session ID:  PGH78N9J3MGSV7JI8MXK
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: CURRENT file:  CURRENT
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: IDENTITY file:  IDENTITY
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58727 ; 
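
The DB SUMMARY lists the monitor's RocksDB store as it exists on disk at open time; the same layout can be checked directly (a sketch; the numbered file names rotate as WALs roll and compactions run):

    ls /var/lib/ceph/mon/ceph-compute-0/store.db
    # CURRENT  IDENTITY  MANIFEST-000010  000008.sst  000009.log  ...
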
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                         Options.error_if_exists: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.create_if_missing: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                         Options.paranoid_checks: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                                     Options.env: 0x562cd2f9dc20
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                                Options.info_log: 0x562cd393dac0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.max_file_opening_threads: 16
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                              Options.statistics: (nil)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                               Options.use_fsync: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.max_log_file_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                         Options.allow_fallocate: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.use_direct_reads: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.create_missing_column_families: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                              Options.db_log_dir: 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                                 Options.wal_dir: 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.advise_random_on_open: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                    Options.write_buffer_manager: 0x562cd3941900
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                            Options.rate_limiter: (nil)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.unordered_write: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                               Options.row_cache: None
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                              Options.wal_filter: None
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.allow_ingest_behind: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.two_write_queues: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.manual_wal_flush: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.wal_compression: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.atomic_flush: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.log_readahead_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.allow_data_in_errors: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.db_host_id: __hostname__
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.max_background_jobs: 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.max_background_compactions: -1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.max_subcompactions: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.max_total_wal_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                          Options.max_open_files: -1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                          Options.bytes_per_sync: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:       Options.compaction_readahead_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.max_background_flushes: -1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Compression algorithms supported:
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kZSTD supported: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kXpressCompression supported: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kBZip2Compression supported: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kLZ4Compression supported: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kZlibCompression supported: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kLZ4HCCompression supported: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         kSnappyCompression supported: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:           Options.merge_operator: 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:        Options.compaction_filter: None
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562cd393caa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562cd3961350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:        Options.write_buffer_size: 33554432
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:  Options.max_write_buffer_number: 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.compression: NoCompression
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.num_levels: 7
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 695446f9-d869-48df-88e4-d00a44aa150b
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302459547242, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302459561115, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58478, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56952, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54469, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302459, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302459561259, "job": 1, "event": "recovery_finished"}
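
Each EVENT_LOG_v1 record is one-line JSON after the "EVENT_LOG_v1 " marker, so the recovery sequence (recovery_started, table_file_creation, recovery_finished) can be pulled out of a captured log mechanically; a sketch, assuming the journal was saved to mon.log and jq is available:

    sed -n 's/.*EVENT_LOG_v1 //p' mon.log | jq -c '{event, job}'
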
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562cd3962e00
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: DB pointer 0x562cd3a6c000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.94 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.94 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562cd3961350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???) e1 preinit fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).mds e1 new map
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-10-12T20:54:17:378101+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : last_changed 2025-10-12T20:54:15.161334+0000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : created 2025-10-12T20:54:15.161334+0000
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap 
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.593235821 +0000 UTC m=+0.059036496 container create 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 16:54:19 np0005481680 systemd[1]: Started libpod-conmon-9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7.scope.
Oct 12 16:54:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db03d698c0cba33ce1b23c278137206119d6b90f74d66d6a386b3355a43c064b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db03d698c0cba33ce1b23c278137206119d6b90f74d66d6a386b3355a43c064b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db03d698c0cba33ce1b23c278137206119d6b90f74d66d6a386b3355a43c064b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.572874622 +0000 UTC m=+0.038675347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.671017157 +0000 UTC m=+0.136817852 container init 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.67720613 +0000 UTC m=+0.143006805 container start 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.679765405 +0000 UTC m=+0.145566080 container attach 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct 12 16:54:19 np0005481680 systemd[1]: libpod-9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7.scope: Deactivated successfully.
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.876846228 +0000 UTC m=+0.342646923 container died 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:54:19 np0005481680 podman[73609]: 2025-10-12 20:54:19.927748735 +0000 UTC m=+0.393549450 container remove 9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7 (image=quay.io/ceph/ceph:v19, name=nifty_cori, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:19 np0005481680 systemd[1]: libpod-conmon-9ab1578db2586e01ec12643ff14f0f97cc4502387c0f9a4fdb210fec93917fc7.scope: Deactivated successfully.
Oct 12 16:54:19 np0005481680 podman[73701]: 2025-10-12 20:54:19.987506001 +0000 UTC m=+0.036031020 container create 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:54:20 np0005481680 systemd[1]: Started libpod-conmon-3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65.scope.
Oct 12 16:54:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43f0ae1a0cf12853ca07241a367bd9ad8f70612f8b85d82c7c50eca93f34d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43f0ae1a0cf12853ca07241a367bd9ad8f70612f8b85d82c7c50eca93f34d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43f0ae1a0cf12853ca07241a367bd9ad8f70612f8b85d82c7c50eca93f34d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:20.052843912 +0000 UTC m=+0.101368901 container init 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:20.064049761 +0000 UTC m=+0.112574770 container start 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:19.972265494 +0000 UTC m=+0.020790493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:20.070040267 +0000 UTC m=+0.118565276 container attach 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:54:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct 12 16:54:20 np0005481680 systemd[1]: libpod-3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65.scope: Deactivated successfully.
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:20.343873587 +0000 UTC m=+0.392398586 container died 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cb43f0ae1a0cf12853ca07241a367bd9ad8f70612f8b85d82c7c50eca93f34d2-merged.mount: Deactivated successfully.
Oct 12 16:54:20 np0005481680 podman[73701]: 2025-10-12 20:54:20.39227357 +0000 UTC m=+0.440798559 container remove 3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65 (image=quay.io/ceph/ceph:v19, name=fervent_robinson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:20 np0005481680 systemd[1]: libpod-conmon-3cd479f969ff3c2393269c32652fb6618939c308e5b128fc301f3346c5a14b65.scope: Deactivated successfully.
Oct 12 16:54:20 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:20 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:20 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:20 np0005481680 systemd[1]: Reloading.
Oct 12 16:54:20 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:54:20 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:54:20 np0005481680 systemd[1]: Starting Ceph mgr.compute-0.fmjeht for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:54:21 np0005481680 podman[73882]: 2025-10-12 20:54:21.226096381 +0000 UTC m=+0.052412671 container create 6f8c72bc2e251a9d57c2caf59f2ae32a6f535b34f4b52ed2968041b50046fce3 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ae57585c7dd530d2178397372931b3ee4976b6ee0c86746dacd88db5632acb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ae57585c7dd530d2178397372931b3ee4976b6ee0c86746dacd88db5632acb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ae57585c7dd530d2178397372931b3ee4976b6ee0c86746dacd88db5632acb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ae57585c7dd530d2178397372931b3ee4976b6ee0c86746dacd88db5632acb/merged/var/lib/ceph/mgr/ceph-compute-0.fmjeht supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 podman[73882]: 2025-10-12 20:54:21.286994632 +0000 UTC m=+0.113310932 container init 6f8c72bc2e251a9d57c2caf59f2ae32a6f535b34f4b52ed2968041b50046fce3 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:54:21 np0005481680 podman[73882]: 2025-10-12 20:54:21.206837835 +0000 UTC m=+0.033154145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:21 np0005481680 podman[73882]: 2025-10-12 20:54:21.29714952 +0000 UTC m=+0.123465800 container start 6f8c72bc2e251a9d57c2caf59f2ae32a6f535b34f4b52ed2968041b50046fce3 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct 12 16:54:21 np0005481680 bash[73882]: 6f8c72bc2e251a9d57c2caf59f2ae32a6f535b34f4b52ed2968041b50046fce3
Oct 12 16:54:21 np0005481680 systemd[1]: Started Ceph mgr.compute-0.fmjeht for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.396716977 +0000 UTC m=+0.061131108 container create c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 16:54:21 np0005481680 systemd[1]: Started libpod-conmon-c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7.scope.
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.367949891 +0000 UTC m=+0.032364072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1aec5fc20b4394407a5c55bf50b1347fc19f7719534f1b968363809a0da33/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1aec5fc20b4394407a5c55bf50b1347fc19f7719534f1b968363809a0da33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b1aec5fc20b4394407a5c55bf50b1347fc19f7719534f1b968363809a0da33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:54:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:21.478+0000 7f561be53140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.493251465 +0000 UTC m=+0.157665576 container init c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.504710742 +0000 UTC m=+0.169124843 container start c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.508136423 +0000 UTC m=+0.172550534 container attach c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:54:21 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:54:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:21.557+0000 7f561be53140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:54:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 12 16:54:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105176282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]: 
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]: {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "health": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "status": "HEALTH_OK",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "checks": {},
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "mutes": []
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "election_epoch": 5,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "quorum": [
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        0
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    ],
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "quorum_names": [
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "compute-0"
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    ],
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "quorum_age": 2,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "monmap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "epoch": 1,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "min_mon_release_name": "squid",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_mons": 1
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "osdmap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "epoch": 1,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_osds": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_up_osds": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "osd_up_since": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_in_osds": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "osd_in_since": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_remapped_pgs": 0
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "pgmap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "pgs_by_state": [],
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_pgs": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_pools": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_objects": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "data_bytes": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "bytes_used": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "bytes_avail": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "bytes_total": 0
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "fsmap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "epoch": 1,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "btime": "2025-10-12T20:54:17:378101+0000",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "by_rank": [],
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "up:standby": 0
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "mgrmap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "available": false,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "num_standbys": 0,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "modules": [
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:            "iostat",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:            "nfs",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:            "restful"
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        ],
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "services": {}
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "servicemap": {
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "epoch": 1,
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "modified": "2025-10-12T20:54:17.381390+0000",
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:        "services": {}
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    },
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]:    "progress_events": {}
Oct 12 16:54:21 np0005481680 eloquent_satoshi[73939]: }
Oct 12 16:54:21 np0005481680 systemd[1]: libpod-c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7.scope: Deactivated successfully.
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.722305518 +0000 UTC m=+0.386719619 container died c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:21 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f5b1aec5fc20b4394407a5c55bf50b1347fc19f7719534f1b968363809a0da33-merged.mount: Deactivated successfully.
Oct 12 16:54:21 np0005481680 podman[73902]: 2025-10-12 20:54:21.762904261 +0000 UTC m=+0.427318372 container remove c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7 (image=quay.io/ceph/ceph:v19, name=eloquent_satoshi, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 16:54:21 np0005481680 systemd[1]: libpod-conmon-c0e977192f6a06c7c10585c059c030ef5efe686d1102fda509b071651bc4f5a7.scope: Deactivated successfully.
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:54:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:22.360+0000 7f561be53140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:54:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:22.980+0000 7f561be53140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:54:22 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:23.139+0000 7f561be53140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:23.207+0000 7f561be53140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:23.348+0000 7f561be53140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:54:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:54:23 np0005481680 podman[73988]: 2025-10-12 20:54:23.838888609 +0000 UTC m=+0.052189375 container create 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:54:23 np0005481680 systemd[1]: Started libpod-conmon-57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313.scope.
Oct 12 16:54:23 np0005481680 podman[73988]: 2025-10-12 20:54:23.808119454 +0000 UTC m=+0.021420310 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:23 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9966e27d5c15daa5264612cf8e75f46e911c1ba77e29aa6db1e04852450f2802/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9966e27d5c15daa5264612cf8e75f46e911c1ba77e29aa6db1e04852450f2802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9966e27d5c15daa5264612cf8e75f46e911c1ba77e29aa6db1e04852450f2802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:23 np0005481680 podman[73988]: 2025-10-12 20:54:23.934447848 +0000 UTC m=+0.147748644 container init 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 12 16:54:23 np0005481680 podman[73988]: 2025-10-12 20:54:23.940503246 +0000 UTC m=+0.153804012 container start 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 16:54:23 np0005481680 podman[73988]: 2025-10-12 20:54:23.943975238 +0000 UTC m=+0.157276024 container attach 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:54:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 12 16:54:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183890939' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]: 
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]: {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "health": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "status": "HEALTH_OK",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "checks": {},
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "mutes": []
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "election_epoch": 5,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "quorum": [
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        0
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    ],
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "quorum_names": [
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "compute-0"
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    ],
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "quorum_age": 4,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "monmap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "epoch": 1,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "min_mon_release_name": "squid",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_mons": 1
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "osdmap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "epoch": 1,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_osds": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_up_osds": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "osd_up_since": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_in_osds": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "osd_in_since": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_remapped_pgs": 0
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "pgmap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "pgs_by_state": [],
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_pgs": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_pools": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_objects": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "data_bytes": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "bytes_used": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "bytes_avail": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "bytes_total": 0
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "fsmap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "epoch": 1,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "btime": "2025-10-12T20:54:17:378101+0000",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "by_rank": [],
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "up:standby": 0
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "mgrmap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "available": false,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "num_standbys": 0,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "modules": [
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:            "iostat",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:            "nfs",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:            "restful"
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        ],
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "services": {}
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "servicemap": {
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "epoch": 1,
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "modified": "2025-10-12T20:54:17.381390+0000",
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:        "services": {}
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    },
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]:    "progress_events": {}
Oct 12 16:54:24 np0005481680 friendly_fermi[74005]: }
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:54:24 np0005481680 systemd[1]: libpod-57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313.scope: Deactivated successfully.
Oct 12 16:54:24 np0005481680 podman[73988]: 2025-10-12 20:54:24.124660329 +0000 UTC m=+0.337961085 container died 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 16:54:24 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9966e27d5c15daa5264612cf8e75f46e911c1ba77e29aa6db1e04852450f2802-merged.mount: Deactivated successfully.
Oct 12 16:54:24 np0005481680 podman[73988]: 2025-10-12 20:54:24.170416615 +0000 UTC m=+0.383717371 container remove 57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313 (image=quay.io/ceph/ceph:v19, name=friendly_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:54:24 np0005481680 systemd[1]: libpod-conmon-57a54513f5cb82f909498bbf95a0081e548900afa982877826077fd2099de313.scope: Deactivated successfully.
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.343+0000 7f561be53140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.549+0000 7f561be53140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.621+0000 7f561be53140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.685+0000 7f561be53140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.763+0000 7f561be53140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:24.834+0000 7f561be53140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:54:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:25.157+0000 7f561be53140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:25.245+0000 7f561be53140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:25.655+0000 7f561be53140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:54:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.165+0000 7f561be53140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.231+0000 7f561be53140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.304+0000 7f561be53140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:54:26 np0005481680 podman[74043]: 2025-10-12 20:54:26.219544382 +0000 UTC m=+0.026164120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:26 np0005481680 podman[74043]: 2025-10-12 20:54:26.346585056 +0000 UTC m=+0.153204804 container create ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:54:26 np0005481680 systemd[1]: Started libpod-conmon-ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5.scope.
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.439+0000 7f561be53140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49b40c8bafd8ac33fb11a6b7fb4f3f705912f7672f5176bcece1a6793345c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49b40c8bafd8ac33fb11a6b7fb4f3f705912f7672f5176bcece1a6793345c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c49b40c8bafd8ac33fb11a6b7fb4f3f705912f7672f5176bcece1a6793345c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:26 np0005481680 podman[74043]: 2025-10-12 20:54:26.484753438 +0000 UTC m=+0.291373246 container init ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:26 np0005481680 podman[74043]: 2025-10-12 20:54:26.495795243 +0000 UTC m=+0.302414961 container start ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.505+0000 7f561be53140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:54:26 np0005481680 podman[74043]: 2025-10-12 20:54:26.551655145 +0000 UTC m=+0.358274953 container attach ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.649+0000 7f561be53140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:54:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 12 16:54:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032406637' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]: 
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]: {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "health": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "status": "HEALTH_OK",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "checks": {},
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "mutes": []
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "election_epoch": 5,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "quorum": [
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        0
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    ],
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "quorum_names": [
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "compute-0"
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    ],
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "quorum_age": 7,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "monmap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "epoch": 1,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "min_mon_release_name": "squid",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_mons": 1
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "osdmap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "epoch": 1,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_osds": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_up_osds": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "osd_up_since": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_in_osds": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "osd_in_since": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_remapped_pgs": 0
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "pgmap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "pgs_by_state": [],
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_pgs": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_pools": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_objects": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "data_bytes": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "bytes_used": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "bytes_avail": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "bytes_total": 0
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "fsmap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "epoch": 1,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "btime": "2025-10-12T20:54:17.378101+0000",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "by_rank": [],
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "up:standby": 0
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "mgrmap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "available": false,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "num_standbys": 0,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "modules": [
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:            "iostat",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:            "nfs",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:            "restful"
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        ],
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "services": {}
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "servicemap": {
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "epoch": 1,
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "modified": "2025-10-12T20:54:17.381390+0000",
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:        "services": {}
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    },
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]:    "progress_events": {}
Oct 12 16:54:26 np0005481680 dazzling_pike[74060]: }
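The block above is the JSON payload of a status call ({"prefix": "status", "format": "json-pretty"}, dispatched by the mon two lines before the output begins) that cephadm ran inside the short-lived dazzling_pike container. A minimal sketch of consuming that payload in Python, with the literal trimmed to fields shown in the log:

    import json

    # Trimmed from the dazzling_pike status output above.
    status = json.loads("""
    {
      "fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
      "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
      "quorum_names": ["compute-0"],
      "osdmap": {"num_osds": 0, "num_up_osds": 0},
      "mgrmap": {"available": false, "num_standbys": 0}
    }
    """)

    assert status["health"]["status"] == "HEALTH_OK"
    # No OSDs yet and "available": false for the mgr, which is why the
    # bootstrap keeps polling status until the mgr map flips to available.
    print(status["quorum_names"], status["mgrmap"]["available"])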
Oct 12 16:54:26 np0005481680 systemd[1]: libpod-ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5.scope: Deactivated successfully.
Oct 12 16:54:26 np0005481680 podman[74086]: 2025-10-12 20:54:26.775634479 +0000 UTC m=+0.026790219 container died ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:54:26 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2c49b40c8bafd8ac33fb11a6b7fb4f3f705912f7672f5176bcece1a6793345c1-merged.mount: Deactivated successfully.
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:26.852+0000 7f561be53140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:54:26 np0005481680 podman[74086]: 2025-10-12 20:54:26.905779125 +0000 UTC m=+0.156934815 container remove ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5 (image=quay.io/ceph/ceph:v19, name=dazzling_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:54:26 np0005481680 systemd[1]: libpod-conmon-ef4826bbb5200578f7f78d8a7de6d27f778c92cd3957aa68c7ca0ae3fd0ef4d5.scope: Deactivated successfully.
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:54:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:27.115+0000 7f561be53140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:54:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:27.185+0000 7f561be53140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
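Every "Module <name> has missing NOTIFY_TYPES member" line above (alerts through zabbix) is ceph-mgr probing a freshly loaded Python module for a NOTIFY_TYPES attribute; modules without it still load, they just declare no interest in targeted notifications. A self-contained sketch of that attribute probe, using illustrative stand-in classes rather than Ceph's real mgr_module plumbing:

    from enum import Enum

    class NotifyType(str, Enum):
        # Assumption: stand-in for mgr_module.NotifyType; values illustrative.
        mon_map = "mon_map"
        osd_map = "osd_map"
        health = "health"

    class Zabbix:                       # stand-in module: declares nothing
        pass

    class Balancer:                     # stand-in module: opts in to osd_map
        NOTIFY_TYPES = [NotifyType.osd_map]

    for mod in (Zabbix, Balancer):
        if getattr(mod, "NOTIFY_TYPES", None) is None:
            # Mirrors the journal lines emitted during module load above.
            print(f"mgr[py] Module {mod.__name__.lower()} has missing NOTIFY_TYPES member")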
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x56338d3bc9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fmjeht(active, starting, since 0.0476813s)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e1 all = 1
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"}]: dispatch
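Each handle_command/dispatch pair above is the mon receiving a JSON-encoded mon_command from the newly activated mgr (mds, osd, mon, and mgr metadata in turn). The same wire format can be driven from a client via python-rados; a sketch, assuming a reachable cluster, the stock /etc/ceph/ceph.conf, and a readable client.admin keyring:

    import json
    import rados  # python3-rados, shipped with the Ceph client packages

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # Same shape the mon logs above as mon_command({"prefix": "mon metadata", ...})
        cmd = json.dumps({"prefix": "mon metadata", "id": "compute-0"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(outbuf) if outbuf else outs)
    finally:
        cluster.shutdown()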
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:54:27
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [balancer INFO root] No pools available
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmjeht is now available
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [progress INFO root] No stored events to load
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded [] historic events
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"} v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 12 16:54:27 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: Manager daemon compute-0.fmjeht is now available
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:27 np0005481680 ceph-mon[73608]: from='mgr.14102 192.168.122.100:0/2913650304' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:28 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fmjeht(active, since 1.08381s)
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.025579231 +0000 UTC m=+0.074674497 container create 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:29 np0005481680 systemd[1]: Started libpod-conmon-61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205.scope.
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:28.993011012 +0000 UTC m=+0.042106318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5a10c729f018857df20f00a867b0416d1b87f04a33df976fef021a4fbcf320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5a10c729f018857df20f00a867b0416d1b87f04a33df976fef021a4fbcf320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5a10c729f018857df20f00a867b0416d1b87f04a33df976fef021a4fbcf320/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.13852796 +0000 UTC m=+0.187623276 container init 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.14361093 +0000 UTC m=+0.192706156 container start 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.179181015 +0000 UTC m=+0.228276251 container attach 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:29 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fmjeht(active, since 2s)
Oct 12 16:54:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 12 16:54:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814334092' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]: 
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]: {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "health": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "status": "HEALTH_OK",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "checks": {},
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "mutes": []
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "election_epoch": 5,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "quorum": [
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        0
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    ],
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "quorum_names": [
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "compute-0"
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    ],
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "quorum_age": 10,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "monmap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "epoch": 1,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "min_mon_release_name": "squid",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_mons": 1
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "osdmap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "epoch": 1,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_osds": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_up_osds": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "osd_up_since": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_in_osds": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "osd_in_since": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_remapped_pgs": 0
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "pgmap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "pgs_by_state": [],
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_pgs": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_pools": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_objects": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "data_bytes": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "bytes_used": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "bytes_avail": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "bytes_total": 0
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "fsmap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "epoch": 1,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "btime": "2025-10-12T20:54:17.378101+0000",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "by_rank": [],
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "up:standby": 0
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "mgrmap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "available": true,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "num_standbys": 0,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "modules": [
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:            "iostat",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:            "nfs",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:            "restful"
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        ],
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "services": {}
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "servicemap": {
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "epoch": 1,
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "modified": "2025-10-12T20:54:17.381390+0000",
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:        "services": {}
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    },
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]:    "progress_events": {}
Oct 12 16:54:29 np0005481680 heuristic_taussig[74199]: }
Oct 12 16:54:29 np0005481680 systemd[1]: libpod-61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205.scope: Deactivated successfully.
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.618568002 +0000 UTC m=+0.667663268 container died 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0a5a10c729f018857df20f00a867b0416d1b87f04a33df976fef021a4fbcf320-merged.mount: Deactivated successfully.
Oct 12 16:54:29 np0005481680 podman[74182]: 2025-10-12 20:54:29.655440715 +0000 UTC m=+0.704535951 container remove 61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205 (image=quay.io/ceph/ceph:v19, name=heuristic_taussig, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:29 np0005481680 systemd[1]: libpod-conmon-61a4e7592279218bcb67b91b55bf0c1850c9f269f860fb1443930de572a40205.scope: Deactivated successfully.
Oct 12 16:54:29 np0005481680 podman[74234]: 2025-10-12 20:54:29.738350654 +0000 UTC m=+0.047888380 container create d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 16:54:29 np0005481680 systemd[1]: Started libpod-conmon-d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0.scope.
Oct 12 16:54:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18a498725211670a50b019ba10ead0115b90046d15655908c48856b519791d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18a498725211670a50b019ba10ead0115b90046d15655908c48856b519791d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18a498725211670a50b019ba10ead0115b90046d15655908c48856b519791d3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18a498725211670a50b019ba10ead0115b90046d15655908c48856b519791d3/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:29 np0005481680 podman[74234]: 2025-10-12 20:54:29.712235256 +0000 UTC m=+0.021773032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:29 np0005481680 podman[74234]: 2025-10-12 20:54:29.81308461 +0000 UTC m=+0.122622356 container init d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:54:29 np0005481680 podman[74234]: 2025-10-12 20:54:29.820006604 +0000 UTC m=+0.129544330 container start d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Oct 12 16:54:29 np0005481680 podman[74234]: 2025-10-12 20:54:29.822872048 +0000 UTC m=+0.132409804 container attach d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 12 16:54:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4264638223' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:54:30 np0005481680 zealous_ellis[74250]: 
Oct 12 16:54:30 np0005481680 zealous_ellis[74250]: [global]
Oct 12 16:54:30 np0005481680 zealous_ellis[74250]: 	fsid = 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:54:30 np0005481680 zealous_ellis[74250]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
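The two lines above are the entire ceph.conf that "config assimilate-conf" leaves on stdout once every other option has been absorbed into the mon config database: a [global] stanza holding only fsid and mon_host. A sketch that rebuilds and re-parses that minimal file, with the values copied from the log:

    fsid = "5adb8c35-1b74-5730-a252-62321f654cd5"
    mon_host = "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]"

    # Rebuild the stanza exactly as logged (tab-indented keys under [global]).
    minimal_conf = "[global]\n\tfsid = {}\n\tmon_host = {}\n".format(fsid, mon_host)

    # Parse it back with a tolerant key=value scan, as an INI reader would.
    parsed = {}
    for line in minimal_conf.splitlines():
        key, sep, value = line.strip().partition("=")
        if sep:
            parsed[key.strip()] = value.strip()

    assert parsed["fsid"] == fsid and parsed["mon_host"] == mon_host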
Oct 12 16:54:30 np0005481680 systemd[1]: libpod-d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0.scope: Deactivated successfully.
Oct 12 16:54:30 np0005481680 podman[74276]: 2025-10-12 20:54:30.223498895 +0000 UTC m=+0.030860308 container died d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e18a498725211670a50b019ba10ead0115b90046d15655908c48856b519791d3-merged.mount: Deactivated successfully.
Oct 12 16:54:30 np0005481680 podman[74276]: 2025-10-12 20:54:30.282831789 +0000 UTC m=+0.090193162 container remove d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0 (image=quay.io/ceph/ceph:v19, name=zealous_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:30 np0005481680 systemd[1]: libpod-conmon-d5db863f0fc6cde1e28a5be0671dfdbdf9e734e54f9707b22b238801997bd0c0.scope: Deactivated successfully.
Oct 12 16:54:30 np0005481680 podman[74289]: 2025-10-12 20:54:30.354902728 +0000 UTC m=+0.043482199 container create cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 16:54:30 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/4264638223' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:54:30 np0005481680 systemd[1]: Started libpod-conmon-cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327.scope.
Oct 12 16:54:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:30 np0005481680 podman[74289]: 2025-10-12 20:54:30.334174998 +0000 UTC m=+0.022754449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dff856084c4b6ddcc1f63f4d9bb1b9f8b816bfa81edc46619a03c9650558be0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dff856084c4b6ddcc1f63f4d9bb1b9f8b816bfa81edc46619a03c9650558be0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dff856084c4b6ddcc1f63f4d9bb1b9f8b816bfa81edc46619a03c9650558be0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:30 np0005481680 podman[74289]: 2025-10-12 20:54:30.444810121 +0000 UTC m=+0.133389652 container init cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:30 np0005481680 podman[74289]: 2025-10-12 20:54:30.454954969 +0000 UTC m=+0.143534440 container start cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:30 np0005481680 podman[74289]: 2025-10-12 20:54:30.458447792 +0000 UTC m=+0.147027223 container attach cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 16:54:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct 12 16:54:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1805981153' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:31 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1805981153' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 12 16:54:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1805981153' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  1: '-n'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  2: 'mgr.compute-0.fmjeht'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  3: '-f'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  4: '--setuser'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  5: 'ceph'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  6: '--setgroup'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  7: 'ceph'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  8: '--default-log-to-file=false'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  9: '--default-log-to-journald=true'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr respawn  exe_path /proc/self/exe
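Enabling the cephadm module changed the set of enabled mgr modules, so the mgr respawns: it logs its saved argv (entries 0 through 10 above) and then re-execs through /proc/self/exe, which names the running executable even if its on-disk path has since been replaced. A minimal, illustrative Python sketch of that re-exec pattern (not the mgr's actual C++ respawn code):

    import os
    import sys

    def respawn(argv):
        # /proc/self/exe points at the running executable image itself,
        # matching the "exe_path /proc/self/exe" line in the log above.
        os.execv("/proc/self/exe", argv)

    if __name__ == "__main__":
        # Guard so this demo respawns exactly once instead of looping.
        if os.environ.get("DEMO_RESPAWNED") != "1":
            os.environ["DEMO_RESPAWNED"] = "1"  # execv keeps the environment
            respawn([sys.executable, *sys.argv])
        print("respawned as pid", os.getpid())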
Oct 12 16:54:31 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fmjeht(active, since 4s)
Oct 12 16:54:31 np0005481680 systemd[1]: libpod-cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327.scope: Deactivated successfully.
Oct 12 16:54:31 np0005481680 podman[74289]: 2025-10-12 20:54:31.421190004 +0000 UTC m=+1.109769435 container died cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 16:54:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setuser ceph since I am not root
Oct 12 16:54:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setgroup ceph since I am not root
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:54:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:31.630+0000 7f1e608bd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:54:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1dff856084c4b6ddcc1f63f4d9bb1b9f8b816bfa81edc46619a03c9650558be0-merged.mount: Deactivated successfully.
Oct 12 16:54:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:31.708+0000 7f1e608bd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:54:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:54:31 np0005481680 podman[74289]: 2025-10-12 20:54:31.810204998 +0000 UTC m=+1.498784469 container remove cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327 (image=quay.io/ceph/ceph:v19, name=keen_mendeleev, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:31 np0005481680 podman[74364]: 2025-10-12 20:54:31.919252174 +0000 UTC m=+0.080368363 container create d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 16:54:31 np0005481680 systemd[1]: Started libpod-conmon-d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85.scope.
Oct 12 16:54:31 np0005481680 podman[74364]: 2025-10-12 20:54:31.876972291 +0000 UTC m=+0.038088490 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881044068e96fe682fe4c76591842c21438fdef14799c0c9fd77e538bdf5517/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881044068e96fe682fe4c76591842c21438fdef14799c0c9fd77e538bdf5517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881044068e96fe682fe4c76591842c21438fdef14799c0c9fd77e538bdf5517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:32 np0005481680 podman[74364]: 2025-10-12 20:54:32.024511338 +0000 UTC m=+0.185627547 container init d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:32 np0005481680 podman[74364]: 2025-10-12 20:54:32.033351379 +0000 UTC m=+0.194467568 container start d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:32 np0005481680 podman[74364]: 2025-10-12 20:54:32.066310537 +0000 UTC m=+0.227426716 container attach d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:32 np0005481680 systemd[1]: libpod-conmon-cb23f8a75f437314e0dfd60f53b8a2467dc085624f6c6434e5c5c9d7734de327.scope: Deactivated successfully.
Oct 12 16:54:32 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1805981153' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 12 16:54:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 12 16:54:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177207691' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]: {
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]:    "epoch": 5,
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]:    "available": true,
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]:    "active_name": "compute-0.fmjeht",
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]:    "num_standby": 0
Oct 12 16:54:32 np0005481680 gallant_tesla[74380]: }
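[Annotation] gallant_tesla is one of cephadm's disposable shell containers; its stdout, the `ceph mgr stat` JSON above, is captured line by line by journald under the container name. A minimal sketch of reproducing and parsing that output, assuming a working `ceph` CLI and admin keyring on the host (both visible in the mounts above):

    import json
    import subprocess

    # Run `ceph mgr stat` and parse the same JSON journald captured above.
    out = subprocess.run(
        ["ceph", "mgr", "stat", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    stat = json.loads(out)
    print(stat["active_name"], "available:", stat["available"],
          "standbys:", stat["num_standby"])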
Oct 12 16:54:32 np0005481680 systemd[1]: libpod-d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85.scope: Deactivated successfully.
Oct 12 16:54:32 np0005481680 podman[74364]: 2025-10-12 20:54:32.449489062 +0000 UTC m=+0.610605251 container died d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:54:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3881044068e96fe682fe4c76591842c21438fdef14799c0c9fd77e538bdf5517-merged.mount: Deactivated successfully.
Oct 12 16:54:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:32.526+0000 7f1e608bd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:54:32 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:54:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:54:32 np0005481680 podman[74364]: 2025-10-12 20:54:32.553447937 +0000 UTC m=+0.714564096 container remove d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85 (image=quay.io/ceph/ceph:v19, name=gallant_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:32 np0005481680 systemd[1]: libpod-conmon-d80aeeb9be80a718e9998b8e1b2981efe1c9ce2ac7812454ded08b5ee72e2f85.scope: Deactivated successfully.
Oct 12 16:54:32 np0005481680 podman[74430]: 2025-10-12 20:54:32.626450703 +0000 UTC m=+0.052444422 container create 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:32 np0005481680 systemd[1]: Started libpod-conmon-4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9.scope.
Oct 12 16:54:32 np0005481680 podman[74430]: 2025-10-12 20:54:32.591492336 +0000 UTC m=+0.017486075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:32 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f97220c7f92f1ff4de55a3dcb2e706b31db1d138eb7db9b7edcf0a1af2d0a12/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f97220c7f92f1ff4de55a3dcb2e706b31db1d138eb7db9b7edcf0a1af2d0a12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f97220c7f92f1ff4de55a3dcb2e706b31db1d138eb7db9b7edcf0a1af2d0a12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:32 np0005481680 podman[74430]: 2025-10-12 20:54:32.721466037 +0000 UTC m=+0.147459776 container init 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:32 np0005481680 podman[74430]: 2025-10-12 20:54:32.726221267 +0000 UTC m=+0.152215006 container start 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:32 np0005481680 podman[74430]: 2025-10-12 20:54:32.733024396 +0000 UTC m=+0.159018225 container attach 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:33.133+0000 7f1e608bd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
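[Annotation] The three lines above are scipy's own import-time warning: diskprediction_local imports scipy inside one of ceph-mgr's embedded Python sub-interpreters, which NumPy does not formally support. It is an upstream warning, not a deployment error. If the noise matters, a standard-library sketch of silencing just this warning (whether to patch it into the module is a judgment call):

    import warnings

    # Suppress only the sub-interpreter UserWarning quoted in the log above;
    # all other warnings still surface normally.
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
        category=UserWarning,
    )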
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:33.286+0000 7f1e608bd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:33.353+0000 7f1e608bd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:54:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:33.498+0000 7f1e608bd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:54:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.443+0000 7f1e608bd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.643+0000 7f1e608bd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.720+0000 7f1e608bd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.785+0000 7f1e608bd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.860+0000 7f1e608bd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:54:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:34.926+0000 7f1e608bd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:54:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:54:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:35.254+0000 7f1e608bd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:54:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:35.343+0000 7f1e608bd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:54:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:35.728+0000 7f1e608bd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:54:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.225+0000 7f1e608bd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.289+0000 7f1e608bd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.361+0000 7f1e608bd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.496+0000 7f1e608bd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.559+0000 7f1e608bd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.706+0000 7f1e608bd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:54:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:36.911+0000 7f1e608bd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:54:36 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:54:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:37.155+0000 7f1e608bd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:54:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:54:37.224+0000 7f1e608bd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
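[Annotation] Every "-1 mgr[py] Module X has missing NOTIFY_TYPES member" line above means that module does not declare which cluster notifications it consumes, so the mgr cannot skip needless notify() calls for it; the message is informational and the modules still load. An illustrative sketch of what a module declares, based on the upstream mgr_module API (this runs only inside ceph-mgr's embedded interpreter, and the class body is an example, not code from any module named above):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the "missing NOTIFY_TYPES member"
        # log line and restricts delivery to just these notification kinds.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got %s notification", notify_type)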
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmjeht restarted
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x5575cda9ed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
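[Annotation] osdmap e2 shows a cluster with zero OSDs, which is why the mgr later logs "Not sending PG status to monitor yet, waiting for OSDs". A sketch of reading the same counters; the JSON field names below match current `ceph osd stat` output but should be treated as assumptions on other releases:

    import json
    import subprocess

    # Read the osdmap counters behind "osdmap e2: 0 total, 0 up, 0 in".
    s = json.loads(subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(f"osdmap e{s['epoch']}: {s['num_osds']} total, "
          f"{s['num_up_osds']} up, {s['num_in_osds']} in")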
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fmjeht(active, starting, since 0.0180602s)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e1 all = 1
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:54:37
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] No pools available
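[Annotation] The balancer module starts immediately on activation, builds plan auto_2025-10-12_20:54:37 in upmap mode with a 5% max-misplaced budget, and bails out because no pools exist yet. The same state can be inspected with the real `ceph balancer status` command; on current releases its output is JSON, though the exact shape may vary:

    import subprocess

    # Show the mode and plan state behind "Mode upmap, max misplaced 0.050000".
    print(subprocess.run(["ceph", "balancer", "status"],
                         check=True, capture_output=True, text=True).stdout)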
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmjeht is now available
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: Active manager daemon compute-0.fmjeht restarted
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: Manager daemon compute-0.fmjeht is now available
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
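[Annotation] The sequence above is the mgr handoff: the mon notices the restarted daemon, re-activates it (mgrmap e6, "active, starting"), and declares it available once module loading finishes; the prefix-less duplicates at 16:54:37 are the same cluster-log entries echoed back to the mon. A hedged sketch in the spirit of what the bootstrap does while waiting (loop bounds and sleep are illustrative):

    import json
    import subprocess
    import time

    # Poll `ceph mgr stat` until the active mgr reports available,
    # as the bootstrap does while mgrmap epochs advance.
    for _ in range(30):
        stat = json.loads(subprocess.run(
            ["ceph", "mgr", "stat", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout)
        if stat.get("available"):
            print("active mgr:", stat["active_name"])
            break
        time.sleep(1)
    else:
        raise TimeoutError("mgr never became available")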
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: cephadm
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [progress INFO root] No stored events to load
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded [] historic events
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
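[Annotation] The restful module binds its default address (:::8003, logged just above) but refuses to serve until a TLS certificate exists, hence "server not running: no certificate configured"; this is expected on a fresh cluster. The documented fix is the module's self-signed-cert helper, run here through subprocess only to keep the examples in one language:

    import subprocess

    # Generate a self-signed cert so the restful module can start serving.
    # `restful create-self-signed-cert` is the documented mgr command.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)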
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"} v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 12 16:54:37 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct 12 16:54:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fmjeht(active, since 1.07713s)
Oct 12 16:54:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 12 16:54:38 np0005481680 vigilant_heisenberg[74448]: {
Oct 12 16:54:38 np0005481680 vigilant_heisenberg[74448]:    "mgrmap_epoch": 7,
Oct 12 16:54:38 np0005481680 vigilant_heisenberg[74448]:    "initialized": true
Oct 12 16:54:38 np0005481680 vigilant_heisenberg[74448]: }
Oct 12 16:54:38 np0005481680 systemd[1]: libpod-4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9.scope: Deactivated successfully.
Oct 12 16:54:38 np0005481680 podman[74430]: 2025-10-12 20:54:38.335260218 +0000 UTC m=+5.761253937 container died 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: Found migration_current of "None". Setting to last migration.
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:38 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6f97220c7f92f1ff4de55a3dcb2e706b31db1d138eb7db9b7edcf0a1af2d0a12-merged.mount: Deactivated successfully.
Oct 12 16:54:38 np0005481680 podman[74430]: 2025-10-12 20:54:38.603422849 +0000 UTC m=+6.029416598 container remove 4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9 (image=quay.io/ceph/ceph:v19, name=vigilant_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 16:54:38 np0005481680 podman[74597]: 2025-10-12 20:54:38.724496615 +0000 UTC m=+0.096089862 container create 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:38 np0005481680 podman[74597]: 2025-10-12 20:54:38.651560792 +0000 UTC m=+0.023154029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:38 np0005481680 systemd[1]: Started libpod-conmon-302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f.scope.
Oct 12 16:54:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442020d651f3bd0afec8e6fda4809afe8ff6b78a84c395ad7fc77a027f2f996/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442020d651f3bd0afec8e6fda4809afe8ff6b78a84c395ad7fc77a027f2f996/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1442020d651f3bd0afec8e6fda4809afe8ff6b78a84c395ad7fc77a027f2f996/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:38 np0005481680 podman[74597]: 2025-10-12 20:54:38.830784638 +0000 UTC m=+0.202377865 container init 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:38 np0005481680 podman[74597]: 2025-10-12 20:54:38.836742262 +0000 UTC m=+0.208335519 container start 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:38 np0005481680 podman[74597]: 2025-10-12 20:54:38.871848728 +0000 UTC m=+0.243441945 container attach 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:38 np0005481680 systemd[1]: libpod-conmon-4915209b5984d23a855c5262cf18d4c08cfd49d173635f944cc188bf3483a7c9.scope: Deactivated successfully.
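[Annotation] Each cephadm CLI call in this log is one disposable container: podman emits create, init, start, and attach events, the command's stdout is journald-tagged with the generated container name (gallant_tesla, vigilant_heisenberg, admiring_swanson, ...), and within seconds the container dies, is removed, and its conmon scope deactivates, as just above. Roughly the following one-shot invocation; the mounts and flags here are illustrative and not cephadm's exact set:

    import subprocess

    # One-shot container in the same spirit as the create/start/attach/
    # died/remove cycles above. Mount list is illustrative.
    subprocess.run([
        "podman", "run", "--rm",
        "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
        "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro",
        "quay.io/ceph/ceph:v19",
        "ceph", "mgr", "stat",
    ], check=True)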
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:54:39 np0005481680 systemd[1]: libpod-302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f.scope: Deactivated successfully.
Oct 12 16:54:39 np0005481680 podman[74597]: 2025-10-12 20:54:39.244463955 +0000 UTC m=+0.616057182 container died 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1442020d651f3bd0afec8e6fda4809afe8ff6b78a84c395ad7fc77a027f2f996-merged.mount: Deactivated successfully.
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:39 np0005481680 podman[74597]: 2025-10-12 20:54:39.485947768 +0000 UTC m=+0.857541015 container remove 302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f (image=quay.io/ceph/ceph:v19, name=admiring_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 16:54:39 np0005481680 systemd[1]: libpod-conmon-302a19c013f32cc6d8faa0c134c5f144b55aa22c16bc3c98ec46fa2026829f9f.scope: Deactivated successfully.
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:54:39] ENGINE Bus STARTING
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:54:39] ENGINE Bus STARTING
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921783 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:54:39 np0005481680 podman[74653]: 2025-10-12 20:54:39.633973049 +0000 UTC m=+0.116040086 container create 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 16:54:39 np0005481680 podman[74653]: 2025-10-12 20:54:39.555598686 +0000 UTC m=+0.037665773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:54:39] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:54:39] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:54:39] ENGINE Client ('192.168.122.100', 35390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:54:39] ENGINE Client ('192.168.122.100', 35390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
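[Annotation] The cephadm module has just started its HTTPS agent endpoint on port 7150 (and, below, a plain-HTTP endpoint on 8765). The "Client ... lost" line is a peer opening a TCP connection and closing it before the TLS handshake completes, which is exactly what a bare reachability probe looks like; it is benign during bootstrap. A minimal sketch of such a probe against the address from the log:

    import socket

    # Open a TCP connection to the cephadm HTTPS endpoint and close it
    # without speaking TLS; the server then logs the handshake-EOF line
    # seen above.
    with socket.create_connection(("192.168.122.100", 7150), timeout=5):
        pass  # connected, then closed immediately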
Oct 12 16:54:39 np0005481680 systemd[1]: Started libpod-conmon-3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851.scope.
Oct 12 16:54:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f813bfa700a82fefff17624b24a24e25368c88603546250ec3086c55275b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f813bfa700a82fefff17624b24a24e25368c88603546250ec3086c55275b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f813bfa700a82fefff17624b24a24e25368c88603546250ec3086c55275b30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:39 np0005481680 podman[74653]: 2025-10-12 20:54:39.743429185 +0000 UTC m=+0.225496222 container init 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:39 np0005481680 podman[74653]: 2025-10-12 20:54:39.751185722 +0000 UTC m=+0.233252759 container start 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:54:39] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:54:39] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:54:39] ENGINE Bus STARTED
Oct 12 16:54:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:54:39] ENGINE Bus STARTED
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:54:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:54:39 np0005481680 podman[74653]: 2025-10-12 20:54:39.795409477 +0000 UTC m=+0.277476494 container attach 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_user
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fmjeht(active, since 3s)
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_config
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 12 16:54:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 12 16:54:40 np0005481680 practical_einstein[74693]: ssh user set to ceph-admin. sudo will be used
Oct 12 16:54:40 np0005481680 systemd[1]: libpod-3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851.scope: Deactivated successfully.
Oct 12 16:54:40 np0005481680 podman[74653]: 2025-10-12 20:54:40.35751295 +0000 UTC m=+0.839579977 container died 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:54:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-09f813bfa700a82fefff17624b24a24e25368c88603546250ec3086c55275b30-merged.mount: Deactivated successfully.
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:54:39] ENGINE Bus STARTING
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:40 np0005481680 podman[74653]: 2025-10-12 20:54:40.484690553 +0000 UTC m=+0.966757591 container remove 3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851 (image=quay.io/ceph/ceph:v19, name=practical_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 16:54:40 np0005481680 systemd[1]: libpod-conmon-3e3358b250c154787b875acc56db053332fb7ee9e54dbd293b8902a23a6f9851.scope: Deactivated successfully.
Oct 12 16:54:40 np0005481680 podman[74730]: 2025-10-12 20:54:40.645154559 +0000 UTC m=+0.129192851 container create 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:40 np0005481680 podman[74730]: 2025-10-12 20:54:40.56847704 +0000 UTC m=+0.052515382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:40 np0005481680 systemd[1]: Started libpod-conmon-5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d.scope.
Oct 12 16:54:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:40 np0005481680 podman[74730]: 2025-10-12 20:54:40.877656712 +0000 UTC m=+0.361695014 container init 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 16:54:40 np0005481680 podman[74730]: 2025-10-12 20:54:40.883252957 +0000 UTC m=+0.367291209 container start 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 16:54:40 np0005481680 podman[74730]: 2025-10-12 20:54:40.986796158 +0000 UTC m=+0.470834430 container attach 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Set ssh private key
Oct 12 16:54:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 12 16:54:41 np0005481680 systemd[1]: libpod-5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d.scope: Deactivated successfully.
Oct 12 16:54:41 np0005481680 podman[74730]: 2025-10-12 20:54:41.380411099 +0000 UTC m=+0.864449351 container died 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:54:39] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:54:39] ENGINE Client ('192.168.122.100', 35390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:54:39] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:54:39] ENGINE Bus STARTED
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: Set ssh ssh_user
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: Set ssh ssh_config
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: ssh user set to ceph-admin. sudo will be used
Oct 12 16:54:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-784dbe60836f217b64f7892d170720fabe10a159c862b223411e2433e1232888-merged.mount: Deactivated successfully.
Oct 12 16:54:43 np0005481680 ceph-mon[73608]: Set ssh ssh_identity_key
Oct 12 16:54:43 np0005481680 ceph-mon[73608]: Set ssh private key
Oct 12 16:54:43 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:44 np0005481680 podman[74730]: 2025-10-12 20:54:44.048517363 +0000 UTC m=+3.532555655 container remove 5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d (image=quay.io/ceph/ceph:v19, name=nice_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 16:54:44 np0005481680 systemd[1]: libpod-conmon-5394643f88c62f3e1105215be359925f076b42a074550536350869086846b64d.scope: Deactivated successfully.
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.119927569 +0000 UTC m=+0.050357769 container create 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:54:44 np0005481680 systemd[1]: Started libpod-conmon-9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300.scope.
Oct 12 16:54:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.094470176 +0000 UTC m=+0.024900426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.204914045 +0000 UTC m=+0.135344325 container init 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.21685298 +0000 UTC m=+0.147283210 container start 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.221384481 +0000 UTC m=+0.151814751 container attach 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053026 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:54:44 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct 12 16:54:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:44 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 12 16:54:44 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 12 16:54:44 np0005481680 systemd[1]: libpod-9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300.scope: Deactivated successfully.
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.636918687 +0000 UTC m=+0.567348917 container died 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 12 16:54:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-585903ff6bf40777a08d5b1faab97b3cf314dbf49ab737489dc6f5e3a717ce61-merged.mount: Deactivated successfully.
Oct 12 16:54:44 np0005481680 podman[74789]: 2025-10-12 20:54:44.690229633 +0000 UTC m=+0.620659853 container remove 9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300 (image=quay.io/ceph/ceph:v19, name=kind_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 16:54:44 np0005481680 systemd[1]: libpod-conmon-9adfcfb551d9142c3eca5da93f0b04ec0c4c021c09a72513f59271ec1c47b300.scope: Deactivated successfully.
Oct 12 16:54:44 np0005481680 podman[74845]: 2025-10-12 20:54:44.763656266 +0000 UTC m=+0.049416388 container create d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:54:44 np0005481680 systemd[1]: Started libpod-conmon-d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917.scope.
Oct 12 16:54:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:44 np0005481680 podman[74845]: 2025-10-12 20:54:44.742809245 +0000 UTC m=+0.028569347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feaf23ff6590d1ef61af15ea977d8beac644e4e4e376ef4304d3933a023ef4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feaf23ff6590d1ef61af15ea977d8beac644e4e4e376ef4304d3933a023ef4e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feaf23ff6590d1ef61af15ea977d8beac644e4e4e376ef4304d3933a023ef4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:44 np0005481680 podman[74845]: 2025-10-12 20:54:44.854228957 +0000 UTC m=+0.139989119 container init d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:44 np0005481680 podman[74845]: 2025-10-12 20:54:44.865017704 +0000 UTC m=+0.150777816 container start d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:44 np0005481680 podman[74845]: 2025-10-12 20:54:44.869387759 +0000 UTC m=+0.155147931 container attach d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:45 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:45 np0005481680 hopeful_booth[74860]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQGAhXOLTd86bHTttUtuOzp8Lpm8nE1DivnDrjCrqOMBMuYGm6QCB4PXkguiJucC3DE/8UhZRnSfNgQ2CwKMgRI8VBw+AXVJpddT2gn/qS9m3wB6y3E8EnqdoOyCve9Z8OFXVJ7UfTfefsMdX/NsJw0t8MnYbAIdnosde8mihAB4TZh1GEYcZjyxRQ1fMKMPokp/ELB7CfXWo8t8Rx5frZi8FZxoHm33U6zsVpDtyP1cMdfj/TN29Pv9VJY152BIMc+1mZcKHnoFFi9jUuMhzpKKo9v4a5eFWoH7e62cyde2twFJdMil2KxR4w/XEGNXLjv85U30tvyDz5fc27ML4EnfpH1LEJ0lfx7w2Rui04XHLpavDysHLrSM1qRXLFTWW+Yh10b5McrI5XJKCy3QW8/gu3m5zVirlEBPLs+6F9Z+NNRqx+Tx9pQsu5Teuz8IHkfmQm6BL0PLj38qzcQkTelc5aanp6vZLyq/ZMzt3uK6S/+OmvTuUzcAX5+JAYMmc= zuul@controller
Oct 12 16:54:45 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:45 np0005481680 systemd[1]: libpod-d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917.scope: Deactivated successfully.
Oct 12 16:54:45 np0005481680 podman[74845]: 2025-10-12 20:54:45.26271452 +0000 UTC m=+0.548474632 container died d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-feaf23ff6590d1ef61af15ea977d8beac644e4e4e376ef4304d3933a023ef4e0-merged.mount: Deactivated successfully.
Oct 12 16:54:45 np0005481680 podman[74845]: 2025-10-12 20:54:45.329922646 +0000 UTC m=+0.615682738 container remove d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917 (image=quay.io/ceph/ceph:v19, name=hopeful_booth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:45 np0005481680 systemd[1]: libpod-conmon-d839a95fcfb9ee822258c5e0e9a6de30a181182595fcec63f97d44201cc90917.scope: Deactivated successfully.
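[Editor's note: the three short-lived quay.io/ceph/ceph:v19 helpers above (nice_darwin, kind_blackwell, hopeful_booth) carry the cephadm SSH bootstrap: "cephadm set-priv-key" and "cephadm set-pub-key" store the identity keypair, which the mon persists under the config-keys mgr/cephadm/ssh_identity_key and ssh_identity_pub, and "cephadm get-pub-key" echoes the public key back, as seen in the hopeful_booth output. A minimal Python sketch of the same sequence via the ceph CLI; the key paths are taken from the bind mounts visible in the xfs remount lines and are otherwise assumptions:

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI subcommand and return its stdout (raises on failure)."""
        return subprocess.run(["ceph", *args], check=True, text=True,
                              capture_output=True).stdout

    # Key paths as bind-mounted into the helper containers above; adjust as needed.
    ceph("cephadm", "set-priv-key", "-i", "/tmp/cephadm-ssh-key")
    ceph("cephadm", "set-pub-key", "-i", "/tmp/cephadm-ssh-key.pub")
    print(ceph("cephadm", "get-pub-key"))  # echoes the ssh-rsa line logged above

End of note.]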
Oct 12 16:54:45 np0005481680 podman[74899]: 2025-10-12 20:54:45.399090858 +0000 UTC m=+0.044094192 container create ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:45 np0005481680 systemd[1]: Started libpod-conmon-ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354.scope.
Oct 12 16:54:45 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7cac808418412c39724bf37e26d3f7cb2260e37992f8dfe610600125b6bdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7cac808418412c39724bf37e26d3f7cb2260e37992f8dfe610600125b6bdb4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7cac808418412c39724bf37e26d3f7cb2260e37992f8dfe610600125b6bdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:45 np0005481680 podman[74899]: 2025-10-12 20:54:45.473145112 +0000 UTC m=+0.118148506 container init ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:45 np0005481680 podman[74899]: 2025-10-12 20:54:45.37952909 +0000 UTC m=+0.024532414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:45 np0005481680 podman[74899]: 2025-10-12 20:54:45.479720309 +0000 UTC m=+0.124723603 container start ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 16:54:45 np0005481680 podman[74899]: 2025-10-12 20:54:45.483271017 +0000 UTC m=+0.128274421 container attach ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:45 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:45 np0005481680 ceph-mon[73608]: Set ssh ssh_identity_pub
Oct 12 16:54:45 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:46 np0005481680 systemd-logind[783]: New session 22 of user ceph-admin.
Oct 12 16:54:46 np0005481680 systemd[1]: Created slice User Slice of UID 42477.
Oct 12 16:54:46 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 12 16:54:46 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 12 16:54:46 np0005481680 systemd[1]: Starting User Manager for UID 42477...
Oct 12 16:54:46 np0005481680 systemd-logind[783]: New session 24 of user ceph-admin.
Oct 12 16:54:46 np0005481680 systemd[74946]: Queued start job for default target Main User Target.
Oct 12 16:54:46 np0005481680 systemd[74946]: Created slice User Application Slice.
Oct 12 16:54:46 np0005481680 systemd[74946]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:54:46 np0005481680 systemd[74946]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 16:54:46 np0005481680 systemd[74946]: Reached target Paths.
Oct 12 16:54:46 np0005481680 systemd[74946]: Reached target Timers.
Oct 12 16:54:46 np0005481680 systemd[74946]: Starting D-Bus User Message Bus Socket...
Oct 12 16:54:46 np0005481680 systemd[74946]: Starting Create User's Volatile Files and Directories...
Oct 12 16:54:46 np0005481680 systemd[74946]: Finished Create User's Volatile Files and Directories.
Oct 12 16:54:46 np0005481680 systemd[74946]: Listening on D-Bus User Message Bus Socket.
Oct 12 16:54:46 np0005481680 systemd[74946]: Reached target Sockets.
Oct 12 16:54:46 np0005481680 systemd[74946]: Reached target Basic System.
Oct 12 16:54:46 np0005481680 systemd[74946]: Reached target Main User Target.
Oct 12 16:54:46 np0005481680 systemd[74946]: Startup finished in 143ms.
Oct 12 16:54:46 np0005481680 systemd[1]: Started User Manager for UID 42477.
Oct 12 16:54:46 np0005481680 systemd[1]: Started Session 22 of User ceph-admin.
Oct 12 16:54:46 np0005481680 systemd[1]: Started Session 24 of User ceph-admin.
Oct 12 16:54:46 np0005481680 systemd-logind[783]: New session 25 of user ceph-admin.
Oct 12 16:54:46 np0005481680 systemd[1]: Started Session 25 of User ceph-admin.
Oct 12 16:54:47 np0005481680 systemd-logind[783]: New session 26 of user ceph-admin.
Oct 12 16:54:47 np0005481680 systemd[1]: Started Session 26 of User ceph-admin.
Oct 12 16:54:47 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 12 16:54:47 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 12 16:54:47 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:47 np0005481680 systemd-logind[783]: New session 27 of user ceph-admin.
Oct 12 16:54:47 np0005481680 systemd[1]: Started Session 27 of User ceph-admin.
Oct 12 16:54:47 np0005481680 ceph-mon[73608]: Deploying cephadm binary to compute-0
Oct 12 16:54:47 np0005481680 systemd-logind[783]: New session 28 of user ceph-admin.
Oct 12 16:54:47 np0005481680 systemd[1]: Started Session 28 of User ceph-admin.
Oct 12 16:54:48 np0005481680 systemd-logind[783]: New session 29 of user ceph-admin.
Oct 12 16:54:48 np0005481680 systemd[1]: Started Session 29 of User ceph-admin.
Oct 12 16:54:48 np0005481680 systemd-logind[783]: New session 30 of user ceph-admin.
Oct 12 16:54:48 np0005481680 systemd[1]: Started Session 30 of User ceph-admin.
Oct 12 16:54:49 np0005481680 systemd-logind[783]: New session 31 of user ceph-admin.
Oct 12 16:54:49 np0005481680 systemd[1]: Started Session 31 of User ceph-admin.
Oct 12 16:54:49 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:49 np0005481680 systemd-logind[783]: New session 32 of user ceph-admin.
Oct 12 16:54:49 np0005481680 systemd[1]: Started Session 32 of User ceph-admin.
Oct 12 16:54:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:54:50 np0005481680 systemd-logind[783]: New session 33 of user ceph-admin.
Oct 12 16:54:50 np0005481680 systemd[1]: Started Session 33 of User ceph-admin.
Oct 12 16:54:50 np0005481680 systemd-logind[783]: New session 34 of user ceph-admin.
Oct 12 16:54:50 np0005481680 systemd[1]: Started Session 34 of User ceph-admin.
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Added host compute-0
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:54:51 np0005481680 magical_wilson[74916]: Added host 'compute-0' with addr '192.168.122.100'
Oct 12 16:54:51 np0005481680 systemd[1]: libpod-ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354.scope: Deactivated successfully.
Oct 12 16:54:51 np0005481680 podman[75309]: 2025-10-12 20:54:51.403290838 +0000 UTC m=+0.038894750 container died ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2f7cac808418412c39724bf37e26d3f7cb2260e37992f8dfe610600125b6bdb4-merged.mount: Deactivated successfully.
Oct 12 16:54:51 np0005481680 podman[75309]: 2025-10-12 20:54:51.461806126 +0000 UTC m=+0.097409998 container remove ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354 (image=quay.io/ceph/ceph:v19, name=magical_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:51 np0005481680 systemd[1]: libpod-conmon-ad98b422a189d1dcee88ce20f11cd6e5d45e48c527aed03aa00348d8b0800354.scope: Deactivated successfully.
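[Editor's note: the "orch host add" dispatch above is the mgr-side effect of "ceph orch host add compute-0 192.168.122.100"; cephadm then opens the ceph-admin SSH sessions recorded by systemd-logind, deploys its binary to the host, and persists the result under the config-key mgr/cephadm/inventory. Each audit line embeds the dispatched command as a JSON array after "cmd="; a hypothetical parser for a saved copy of this journal (the file name "host.log" is an assumption):

    import json
    import re

    AUDIT = re.compile(r'log_channel\(audit\) log \[(?:DBG|INF)\] : .*?cmd=(\[.*\]): dispatch')

    def audit_commands(path: str):
        """Yield (prefix, command_dict) for every dispatched audit command."""
        with open(path) as fh:
            for line in fh:
                m = AUDIT.search(line)
                if not m:
                    continue  # truncated audit lines without cmd= are skipped
                for cmd in json.loads(m.group(1)):
                    yield cmd.get("prefix"), cmd

    for prefix, cmd in audit_commands("host.log"):
        print(prefix, cmd)
    # e.g. orch host add {'prefix': 'orch host add', 'hostname': 'compute-0', ...}

End of note.]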
Oct 12 16:54:51 np0005481680 podman[75365]: 2025-10-12 20:54:51.534529366 +0000 UTC m=+0.044514116 container create 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:51 np0005481680 systemd[1]: Started libpod-conmon-45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97.scope.
Oct 12 16:54:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ed9116324b9d449e2eef7b9f9bcf82329159d9c8e4ea7d0491bc9c110b1d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ed9116324b9d449e2eef7b9f9bcf82329159d9c8e4ea7d0491bc9c110b1d50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ed9116324b9d449e2eef7b9f9bcf82329159d9c8e4ea7d0491bc9c110b1d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:51 np0005481680 podman[75365]: 2025-10-12 20:54:51.514964058 +0000 UTC m=+0.024948828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:51 np0005481680 podman[75365]: 2025-10-12 20:54:51.631044273 +0000 UTC m=+0.141029083 container init 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:51 np0005481680 podman[75365]: 2025-10-12 20:54:51.639157491 +0000 UTC m=+0.149142261 container start 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:51 np0005481680 podman[75365]: 2025-10-12 20:54:51.646880938 +0000 UTC m=+0.156865708 container attach 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 12 16:54:51 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:54:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:51 np0005481680 wizardly_sinoussi[75382]: Scheduled mon update...
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75365]: 2025-10-12 20:54:52.005150047 +0000 UTC m=+0.515134847 container died 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:54:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c3ed9116324b9d449e2eef7b9f9bcf82329159d9c8e4ea7d0491bc9c110b1d50-merged.mount: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75365]: 2025-10-12 20:54:52.054175392 +0000 UTC m=+0.564160132 container remove 45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97 (image=quay.io/ceph/ceph:v19, name=wizardly_sinoussi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-conmon-45d7fb47c39b06e5938b9900106299e283aa18c44ee7f12c5e6d2202dfd83a97.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.126685374 +0000 UTC m=+0.046468531 container create 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 12 16:54:52 np0005481680 systemd[1]: Started libpod-conmon-76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae.scope.
Oct 12 16:54:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23e909695052bf1be39e7e421847f5595534cf32912ef3550f5507cb78cf6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23e909695052bf1be39e7e421847f5595534cf32912ef3550f5507cb78cf6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23e909695052bf1be39e7e421847f5595534cf32912ef3550f5507cb78cf6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.107511979 +0000 UTC m=+0.027295156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.217585366 +0000 UTC m=+0.137368603 container init 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.228676402 +0000 UTC m=+0.148459549 container start 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.232494189 +0000 UTC m=+0.152277356 container attach 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: Added host compute-0
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:52 np0005481680 podman[75403]: 2025-10-12 20:54:52.330402133 +0000 UTC m=+0.581019381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.432464184 +0000 UTC m=+0.033977236 container create 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 16:54:52 np0005481680 systemd[1]: Started libpod-conmon-61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323.scope.
Oct 12 16:54:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.497224969 +0000 UTC m=+0.098738051 container init 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.502650599 +0000 UTC m=+0.104163651 container start 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.50628614 +0000 UTC m=+0.107799222 container attach 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.416884218 +0000 UTC m=+0.018397290 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:52 np0005481680 vigorous_chebyshev[75515]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct 12 16:54:52 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:52 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 12 16:54:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.586932351 +0000 UTC m=+0.188445433 container died 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:52 np0005481680 xenodochial_tu[75461]: Scheduled mgr update...
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.60770027 +0000 UTC m=+0.527483417 container died 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:54:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cc242082c5e15a5a42f8884339d2854cc9b8b906bdd975cf3d5289967116115c-merged.mount: Deactivated successfully.
Oct 12 16:54:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5a23e909695052bf1be39e7e421847f5595534cf32912ef3550f5507cb78cf6b-merged.mount: Deactivated successfully.
Oct 12 16:54:52 np0005481680 podman[75499]: 2025-10-12 20:54:52.645910865 +0000 UTC m=+0.247423927 container remove 61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323 (image=quay.io/ceph/ceph:v19, name=vigorous_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:52 np0005481680 podman[75445]: 2025-10-12 20:54:52.666663573 +0000 UTC m=+0.586446720 container remove 76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae (image=quay.io/ceph/ceph:v19, name=xenodochial_tu, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-conmon-61b8c822a6bcc6f59ae542903b4e1d19d3033573d0ec4acab62a1b70f9252323.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 systemd[1]: libpod-conmon-76848ce5475c43e0e812d43f413b0fde83a467cfd164def76a695ff7f9705fae.scope: Deactivated successfully.
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct 12 16:54:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:52 np0005481680 podman[75544]: 2025-10-12 20:54:52.721625643 +0000 UTC m=+0.038209706 container create 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 16:54:52 np0005481680 systemd[1]: Started libpod-conmon-3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c.scope.
Oct 12 16:54:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4a10ba067256ea5c8f2ded9199ac5c0e503c6b58899336387b18980c6ec622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4a10ba067256ea5c8f2ded9199ac5c0e503c6b58899336387b18980c6ec622/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4a10ba067256ea5c8f2ded9199ac5c0e503c6b58899336387b18980c6ec622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:52 np0005481680 podman[75544]: 2025-10-12 20:54:52.781770356 +0000 UTC m=+0.098354439 container init 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:52 np0005481680 podman[75544]: 2025-10-12 20:54:52.789680269 +0000 UTC m=+0.106264322 container start 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 16:54:52 np0005481680 podman[75544]: 2025-10-12 20:54:52.79274047 +0000 UTC m=+0.109324523 container attach 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:54:52 np0005481680 podman[75544]: 2025-10-12 20:54:52.702332165 +0000 UTC m=+0.018916238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:53 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service crash spec with placement *
Oct 12 16:54:53 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 inspiring_roentgen[75584]: Scheduled crash update...
Oct 12 16:54:53 np0005481680 systemd[1]: libpod-3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c.scope: Deactivated successfully.
Oct 12 16:54:53 np0005481680 conmon[75584]: conmon 3b313e9c0b15a740c888 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c.scope/container/memory.events
Oct 12 16:54:53 np0005481680 podman[75544]: 2025-10-12 20:54:53.180146145 +0000 UTC m=+0.496730208 container died 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4e4a10ba067256ea5c8f2ded9199ac5c0e503c6b58899336387b18980c6ec622-merged.mount: Deactivated successfully.
Oct 12 16:54:53 np0005481680 podman[75544]: 2025-10-12 20:54:53.241806878 +0000 UTC m=+0.558390971 container remove 3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:53 np0005481680 systemd[1]: libpod-conmon-3b313e9c0b15a740c888f7ad9fefeddf232be3550b45062402fd56606b16707c.scope: Deactivated successfully.
Oct 12 16:54:53 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.305150656 +0000 UTC m=+0.040874085 container create 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: Saving service mon spec with placement count:5
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:53 np0005481680 systemd[1]: Started libpod-conmon-87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435.scope.
Oct 12 16:54:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2183b38968a07185101fcaf31f34ea9c228dac4f89b7e4ca6521309f4cf6eba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2183b38968a07185101fcaf31f34ea9c228dac4f89b7e4ca6521309f4cf6eba5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2183b38968a07185101fcaf31f34ea9c228dac4f89b7e4ca6521309f4cf6eba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.288180384 +0000 UTC m=+0.023903823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.397230357 +0000 UTC m=+0.132953796 container init 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.406515565 +0000 UTC m=+0.142239004 container start 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.417391024 +0000 UTC m=+0.153114473 container attach 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct 12 16:54:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/879954502' entity='client.admin' 
Oct 12 16:54:53 np0005481680 systemd[1]: libpod-87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435.scope: Deactivated successfully.
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.815495284 +0000 UTC m=+0.551218713 container died 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2183b38968a07185101fcaf31f34ea9c228dac4f89b7e4ca6521309f4cf6eba5-merged.mount: Deactivated successfully.
Oct 12 16:54:53 np0005481680 podman[75833]: 2025-10-12 20:54:53.842387845 +0000 UTC m=+0.074181009 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:54:53 np0005481680 podman[75717]: 2025-10-12 20:54:53.854695423 +0000 UTC m=+0.590418842 container remove 87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435 (image=quay.io/ceph/ceph:v19, name=sad_keldysh, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:53 np0005481680 systemd[1]: libpod-conmon-87893de95f0a1c830d1a047f484749968d9e8625220ecaa0fcb117ab9f846435.scope: Deactivated successfully.
Oct 12 16:54:53 np0005481680 podman[75867]: 2025-10-12 20:54:53.913981417 +0000 UTC m=+0.036960925 container create d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 16:54:53 np0005481680 systemd[1]: Started libpod-conmon-d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045.scope.
Oct 12 16:54:53 np0005481680 podman[75833]: 2025-10-12 20:54:53.957421716 +0000 UTC m=+0.189214830 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004c9b72df6cea81a5f8475011258f76c9b4488129c0a16ad7f795af5304eccd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004c9b72df6cea81a5f8475011258f76c9b4488129c0a16ad7f795af5304eccd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004c9b72df6cea81a5f8475011258f76c9b4488129c0a16ad7f795af5304eccd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:53 np0005481680 podman[75867]: 2025-10-12 20:54:53.983398877 +0000 UTC m=+0.106378385 container init d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:53 np0005481680 podman[75867]: 2025-10-12 20:54:53.990635407 +0000 UTC m=+0.113614925 container start d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:54:53 np0005481680 podman[75867]: 2025-10-12 20:54:53.896966143 +0000 UTC m=+0.019945661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:53 np0005481680 podman[75867]: 2025-10-12 20:54:53.995784918 +0000 UTC m=+0.118764456 container attach d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:54 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: Saving service mgr spec with placement count:2
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: Saving service crash spec with placement *
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/879954502' entity='client.admin' 
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:54 np0005481680 systemd[1]: libpod-d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045.scope: Deactivated successfully.
Oct 12 16:54:54 np0005481680 podman[75867]: 2025-10-12 20:54:54.330628511 +0000 UTC m=+0.453608069 container died d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 16:54:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-004c9b72df6cea81a5f8475011258f76c9b4488129c0a16ad7f795af5304eccd-merged.mount: Deactivated successfully.
Oct 12 16:54:54 np0005481680 podman[75867]: 2025-10-12 20:54:54.369735976 +0000 UTC m=+0.492715494 container remove d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045 (image=quay.io/ceph/ceph:v19, name=funny_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:54 np0005481680 systemd[1]: libpod-conmon-d389e9428fbcbcc29e8612570351780598347e7a2a2b44be5f448cadaaaf7045.scope: Deactivated successfully.
Oct 12 16:54:54 np0005481680 podman[75998]: 2025-10-12 20:54:54.448754654 +0000 UTC m=+0.054808967 container create 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:54 np0005481680 systemd[1]: Started libpod-conmon-18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f.scope.
Oct 12 16:54:54 np0005481680 podman[75998]: 2025-10-12 20:54:54.422140952 +0000 UTC m=+0.028195355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:54 np0005481680 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76029 (sysctl)
Oct 12 16:54:54 np0005481680 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 12 16:54:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879b9fd2da9d26209a4e723e2a301ac74691c1085875bdeb36b6b081ae24490/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879b9fd2da9d26209a4e723e2a301ac74691c1085875bdeb36b6b081ae24490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f879b9fd2da9d26209a4e723e2a301ac74691c1085875bdeb36b6b081ae24490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:54 np0005481680 podman[75998]: 2025-10-12 20:54:54.560593859 +0000 UTC m=+0.166648182 container init 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:54 np0005481680 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 12 16:54:54 np0005481680 podman[75998]: 2025-10-12 20:54:54.568127028 +0000 UTC m=+0.174181341 container start 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:54:54 np0005481680 podman[75998]: 2025-10-12 20:54:54.571827051 +0000 UTC m=+0.177881364 container attach 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:54:54 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:54:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:54 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Added label _admin to host compute-0
Oct 12 16:54:54 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 12 16:54:54 np0005481680 nervous_wright[76020]: Added label _admin to host compute-0
Oct 12 16:54:54 np0005481680 systemd[1]: libpod-18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f.scope: Deactivated successfully.
Oct 12 16:54:55 np0005481680 podman[76123]: 2025-10-12 20:54:55.013869267 +0000 UTC m=+0.031537477 container died 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:54:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f879b9fd2da9d26209a4e723e2a301ac74691c1085875bdeb36b6b081ae24490-merged.mount: Deactivated successfully.
Oct 12 16:54:55 np0005481680 podman[76123]: 2025-10-12 20:54:55.056421266 +0000 UTC m=+0.074089476 container remove 18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f (image=quay.io/ceph/ceph:v19, name=nervous_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:55 np0005481680 systemd[1]: libpod-conmon-18ac888b00935bb543a393a4cd416ce3dac46f900c2a7fd1075b622edbe7419f.scope: Deactivated successfully.
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.13439683 +0000 UTC m=+0.050662250 container create a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:54:55 np0005481680 systemd[1]: Started libpod-conmon-a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c.scope.
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.106826626 +0000 UTC m=+0.023092116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a54444ff5fc987f54014bf9f948e65c537ba475880bd2794f8dafef28889bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a54444ff5fc987f54014bf9f948e65c537ba475880bd2794f8dafef28889bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a54444ff5fc987f54014bf9f948e65c537ba475880bd2794f8dafef28889bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:55 np0005481680 ceph-mgr[73901]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.279368322 +0000 UTC m=+0.195633762 container init a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.290085387 +0000 UTC m=+0.206350797 container start a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.370398318 +0000 UTC m=+0.286663728 container attach a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct 12 16:54:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3919113458' entity='client.admin' 
Oct 12 16:54:55 np0005481680 dreamy_kirch[76156]: set mgr/dashboard/cluster/status
Oct 12 16:54:55 np0005481680 systemd[1]: libpod-a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c.scope: Deactivated successfully.
Oct 12 16:54:55 np0005481680 podman[76140]: 2025-10-12 20:54:55.817490641 +0000 UTC m=+0.733756221 container died a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:55 np0005481680 podman[76285]: 2025-10-12 20:54:55.824217963 +0000 UTC m=+0.035201147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:54:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a3a54444ff5fc987f54014bf9f948e65c537ba475880bd2794f8dafef28889bb-merged.mount: Deactivated successfully.
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.045555425 +0000 UTC m=+0.256538609 container create 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 16:54:56 np0005481680 systemd[1]: Started libpod-conmon-4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269.scope.
Oct 12 16:54:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:56 np0005481680 podman[76140]: 2025-10-12 20:54:56.174791088 +0000 UTC m=+1.091056528 container remove a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c (image=quay.io/ceph/ceph:v19, name=dreamy_kirch, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.285894468 +0000 UTC m=+0.496877682 container init 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.293576263 +0000 UTC m=+0.504559457 container start 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:56 np0005481680 cool_sinoussi[76315]: 167 167
Oct 12 16:54:56 np0005481680 systemd[1]: libpod-4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269.scope: Deactivated successfully.
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.333229507 +0000 UTC m=+0.544212741 container attach 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.333621679 +0000 UTC m=+0.544604863 container died 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:56 np0005481680 ceph-mon[73608]: Added label _admin to host compute-0
Oct 12 16:54:56 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3919113458' entity='client.admin' 
Oct 12 16:54:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7e8120383599a9935dbe0f88bb559edf52924c34a782a722ce79769b317bdac2-merged.mount: Deactivated successfully.
Oct 12 16:54:56 np0005481680 systemd[1]: libpod-conmon-a39175f946c3a6ce3774898f8970c47d3bb711c79dc75f74db54a5e405795c9c.scope: Deactivated successfully.
Oct 12 16:54:56 np0005481680 podman[76285]: 2025-10-12 20:54:56.588866396 +0000 UTC m=+0.799849580 container remove 4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 16:54:56 np0005481680 systemd[1]: libpod-conmon-4a893956d7a4ebf11922cd00cbd4686d7e9ceb703cff66bbf0679cc41ebe1269.scope: Deactivated successfully.
Oct 12 16:54:56 np0005481680 python3[76359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:54:56 np0005481680 podman[76365]: 2025-10-12 20:54:56.879796794 +0000 UTC m=+0.117580346 container create 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 16:54:56 np0005481680 podman[76365]: 2025-10-12 20:54:56.799167994 +0000 UTC m=+0.036951646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:54:56 np0005481680 systemd[1]: Started libpod-conmon-185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1.scope.
Oct 12 16:54:56 np0005481680 podman[76379]: 2025-10-12 20:54:56.982213507 +0000 UTC m=+0.142010945 container create f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 16:54:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdb144b854be5cb9d66aaf262355ce57517b86878272ff7882339b15c376a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdb144b854be5cb9d66aaf262355ce57517b86878272ff7882339b15c376a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdb144b854be5cb9d66aaf262355ce57517b86878272ff7882339b15c376a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdb144b854be5cb9d66aaf262355ce57517b86878272ff7882339b15c376a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:56.949783533 +0000 UTC m=+0.109580991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:57 np0005481680 systemd[1]: Started libpod-conmon-f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d.scope.
Oct 12 16:54:57 np0005481680 podman[76365]: 2025-10-12 20:54:57.094086683 +0000 UTC m=+0.331870325 container init 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:57 np0005481680 podman[76365]: 2025-10-12 20:54:57.109663309 +0000 UTC m=+0.347446901 container start 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:54:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc4ed8adea3918acc8856275f729a72e1d69046a92ac18446a6241bc86b52a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc4ed8adea3918acc8856275f729a72e1d69046a92ac18446a6241bc86b52a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:57 np0005481680 podman[76365]: 2025-10-12 20:54:57.189535497 +0000 UTC m=+0.427319079 container attach 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:54:57 np0005481680 ceph-mgr[73901]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 12 16:54:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:54:57 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:57.260413334 +0000 UTC m=+0.420210842 container init f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:57.271349967 +0000 UTC m=+0.431147435 container start f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:57.286937633 +0000 UTC m=+0.446735161 container attach f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:54:57 np0005481680 ceph-mon[73608]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
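The TOO_FEW_OSDS check fires here because the freshly bootstrapped cluster has 0 OSDs while osd_pool_default_size is 1; the monitor raises the warning whenever the OSD count drops below the default pool size. A minimal sketch of pulling the same health checks programmatically, assuming the conf and admin keyring paths used elsewhere in this log and the JSON layout of `ceph health detail` in current releases:

    # Query cluster health and print each failing check.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for name, check in json.loads(out).get("checks", {}).items():
        print(name, "->", check["summary"]["message"])
    # Expected at this point:
    # TOO_FEW_OSDS -> OSD count 0 < osd_pool_default_size 1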
Oct 12 16:54:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct 12 16:54:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3786219354' entity='client.admin' 
Oct 12 16:54:57 np0005481680 systemd[1]: libpod-f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d.scope: Deactivated successfully.
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:57.682754036 +0000 UTC m=+0.842551484 container died f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1dc4ed8adea3918acc8856275f729a72e1d69046a92ac18446a6241bc86b52a6-merged.mount: Deactivated successfully.
Oct 12 16:54:57 np0005481680 podman[76379]: 2025-10-12 20:54:57.720836268 +0000 UTC m=+0.880633726 container remove f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d (image=quay.io/ceph/ceph:v19, name=amazing_leakey, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:54:57 np0005481680 systemd[1]: libpod-conmon-f5b4982e7d84a496f39b36d82aa5faad67affd10b595821fe8f986452710794d.scope: Deactivated successfully.
Oct 12 16:54:57 np0005481680 cool_tu[76392]: [
Oct 12 16:54:57 np0005481680 cool_tu[76392]:    {
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "available": false,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "being_replaced": false,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "ceph_device_lvm": false,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "lsm_data": {},
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "lvs": [],
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "path": "/dev/sr0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "rejected_reasons": [
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "Insufficient space (<5GB)",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "Has a FileSystem"
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        ],
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        "sys_api": {
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "actuators": null,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "device_nodes": [
Oct 12 16:54:57 np0005481680 cool_tu[76392]:                "sr0"
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            ],
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "devname": "sr0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "human_readable_size": "482.00 KB",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "id_bus": "ata",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "model": "QEMU DVD-ROM",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "nr_requests": "2",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "parent": "/dev/sr0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "partitions": {},
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "path": "/dev/sr0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "removable": "1",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "rev": "2.5+",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "ro": "0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "rotational": "0",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "sas_address": "",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "sas_device_handle": "",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "scheduler_mode": "mq-deadline",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "sectors": 0,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "sectorsize": "2048",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "size": 493568.0,
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "support_discard": "2048",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "type": "disk",
Oct 12 16:54:57 np0005481680 cool_tu[76392]:            "vendor": "QEMU"
Oct 12 16:54:57 np0005481680 cool_tu[76392]:        }
Oct 12 16:54:57 np0005481680 cool_tu[76392]:    }
Oct 12 16:54:57 np0005481680 cool_tu[76392]: ]
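The JSON block printed by the cool_tu container is a ceph-volume style device inventory: the only block device this VM exposes is the 482 KB QEMU DVD-ROM at /dev/sr0, rejected as an OSD candidate both for insufficient space (<5GB) and for already carrying a filesystem, which is consistent with the OSD count of 0 reported above. A small sketch of filtering such a report for deployable devices, using only field names present in the output above:

    # Filter a ceph-volume inventory report for devices usable as OSDs.
    import json

    def usable_devices(report_text):
        devices = json.loads(report_text)
        return [d["path"] for d in devices
                if d.get("available") and not d.get("rejected_reasons")]

    # For the report above this returns []: /dev/sr0 is rejected.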
Oct 12 16:54:57 np0005481680 systemd[1]: libpod-185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1.scope: Deactivated successfully.
Oct 12 16:54:57 np0005481680 podman[76365]: 2025-10-12 20:54:57.97686307 +0000 UTC m=+1.214646662 container died 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 16:54:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-abdb144b854be5cb9d66aaf262355ce57517b86878272ff7882339b15c376a23-merged.mount: Deactivated successfully.
Oct 12 16:54:58 np0005481680 podman[76365]: 2025-10-12 20:54:58.045647069 +0000 UTC m=+1.283430631 container remove 185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_tu, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 16:54:58 np0005481680 systemd[1]: libpod-conmon-185b84f0f124910aa12500bbb80648ce0902a273baeaa5161c3ec11a0acc32d1.scope: Deactivated successfully.
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:54:58 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:54:58 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3786219354' entity='client.admin' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:54:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:54:58 np0005481680 ansible-async_wrapper.py[77769]: Invoked with j890764325164 30 /home/zuul/.ansible/tmp/ansible-tmp-1760302498.111943-33640-116057413876206/AnsiballZ_command.py _
Oct 12 16:54:58 np0005481680 ansible-async_wrapper.py[77820]: Starting module and watcher
Oct 12 16:54:58 np0005481680 ansible-async_wrapper.py[77820]: Start watching 77822 (30)
Oct 12 16:54:58 np0005481680 ansible-async_wrapper.py[77822]: Start module (77822)
Oct 12 16:54:58 np0005481680 ansible-async_wrapper.py[77769]: Return async_wrapper task started.
Oct 12 16:54:58 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:54:58 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:54:58 np0005481680 python3[77823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
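This ansible task shows the pattern used throughout this section for running the ceph CLI without installing it on the host: a throwaway `podman run --rm` container with /etc/ceph bind-mounted, `--entrypoint ceph`, and the fsid, conf, and keyring passed explicitly, here asking cephadm for `orch status --format json`. The same invocation, reconstructed as a sketch with the image, mounts, and fsid copied from the log line:

    # Run `ceph orch status` in a disposable container, mirroring the
    # ansible-ansible.legacy.command task above.
    import json, subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    status = json.loads(subprocess.run(
        cmd, capture_output=True, text=True, check=True).stdout)
    print(status["available"], status["backend"], status["workers"])
    # The serene_bartik container below prints exactly this JSON.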
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.000632958 +0000 UTC m=+0.066233045 container create 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:54:59 np0005481680 systemd[1]: Started libpod-conmon-1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be.scope.
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:58.957780978 +0000 UTC m=+0.023381105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:54:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:54:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7896cb329ab563b713e39442b0d5096206711709dd4ab5ad879ae2a85751809/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7896cb329ab563b713e39442b0d5096206711709dd4ab5ad879ae2a85751809/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.116976932 +0000 UTC m=+0.182577109 container init 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.127415598 +0000 UTC m=+0.193015685 container start 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.14436999 +0000 UTC m=+0.209970117 container attach 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:54:59 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:54:59 np0005481680 serene_bartik[77963]: 
Oct 12 16:54:59 np0005481680 serene_bartik[77963]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 12 16:54:59 np0005481680 systemd[1]: libpod-1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be.scope: Deactivated successfully.
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.506476696 +0000 UTC m=+0.572076793 container died 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:54:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d7896cb329ab563b713e39442b0d5096206711709dd4ab5ad879ae2a85751809-merged.mount: Deactivated successfully.
Oct 12 16:54:59 np0005481680 podman[77881]: 2025-10-12 20:54:59.543661438 +0000 UTC m=+0.609261525 container remove 1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be (image=quay.io/ceph/ceph:v19, name=serene_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:54:59 np0005481680 systemd[1]: libpod-conmon-1f58f5b5821b5cad86189543cb28afca7b2f0a51d4ceeb0a048fec8513e282be.scope: Deactivated successfully.
Oct 12 16:54:59 np0005481680 ansible-async_wrapper.py[77822]: Module complete (77822)
Oct 12 16:54:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:54:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:55:00 np0005481680 python3[78424]: ansible-ansible.legacy.async_status Invoked with jid=j890764325164.77769 mode=status _async_dir=/root/.ansible_async
Oct 12 16:55:00 np0005481680 python3[78576]: ansible-ansible.legacy.async_status Invoked with jid=j890764325164.77769 mode=cleanup _async_dir=/root/.ansible_async
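These async_status calls close out the fire-and-forget cycle started by ansible-async_wrapper.py above: the wrapper forked a watcher and the module itself under a 30 second cap, returned job id j890764325164 immediately, and the controller later polled the job (mode=status) and removed its result file (mode=cleanup) once the module reported complete. Schematically the controller side amounts to polling a result file under the async dir; the key name below is an illustrative assumption, not ansible's exact on-disk format:

    # Schematic poll loop for an ansible async job result file.
    import json, pathlib, time

    def wait_for_job(async_dir, jid, timeout=30.0):
        result = pathlib.Path(async_dir) / jid
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if result.exists():
                data = json.loads(result.read_text())
                if data.get("finished"):  # assumed completion marker
                    return data
            time.sleep(1)
        raise TimeoutError("job %s still running" % jid)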
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:00 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev e8dff2cf-afe8-44d5-8b4f-f5f789bcb1bd (Updating crash deployment (+1 -> 1))
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 12 16:55:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 12 16:55:01 np0005481680 python3[78744]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.248198679 +0000 UTC m=+0.038876639 container create 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:55:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:01 np0005481680 systemd[1]: Started libpod-conmon-443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80.scope.
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.230228754 +0000 UTC m=+0.020906734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.353051853 +0000 UTC m=+0.143729853 container init 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.366739537 +0000 UTC m=+0.157417507 container start 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:01 np0005481680 exciting_leavitt[78806]: 167 167
Oct 12 16:55:01 np0005481680 systemd[1]: libpod-443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80.scope: Deactivated successfully.
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.375212128 +0000 UTC m=+0.165890118 container attach 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.375531578 +0000 UTC m=+0.166209538 container died 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-54189c7894dde5a65c1f488a26f7e7d0098e74fe98f5d5416f8064096e19bdf4-merged.mount: Deactivated successfully.
Oct 12 16:55:01 np0005481680 podman[78790]: 2025-10-12 20:55:01.432206556 +0000 UTC m=+0.222884516 container remove 443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:01 np0005481680 systemd[1]: libpod-conmon-443fa033afa65f1ac76f421637571c0e3608b63af5d40be21543a637bb1e8b80.scope: Deactivated successfully.
Oct 12 16:55:01 np0005481680 systemd[1]: Reloading.
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:55:01 np0005481680 ceph-mon[73608]: Deploying daemon crash.compute-0 on compute-0
Oct 12 16:55:01 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:55:01 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:55:01 np0005481680 python3[78848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:55:01 np0005481680 podman[78885]: 2025-10-12 20:55:01.698416816 +0000 UTC m=+0.043150531 container create 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 12 16:55:01 np0005481680 podman[78885]: 2025-10-12 20:55:01.678424033 +0000 UTC m=+0.023157748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:01 np0005481680 systemd[1]: Started libpod-conmon-82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1.scope.
Oct 12 16:55:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155489a5ad5bab27dbe827562d9a7825f96b6d4ee660d31e73e9857a2d3a04c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155489a5ad5bab27dbe827562d9a7825f96b6d4ee660d31e73e9857a2d3a04c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155489a5ad5bab27dbe827562d9a7825f96b6d4ee660d31e73e9857a2d3a04c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:01 np0005481680 systemd[1]: Reloading.
Oct 12 16:55:01 np0005481680 podman[78885]: 2025-10-12 20:55:01.859234334 +0000 UTC m=+0.203968059 container init 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:01 np0005481680 podman[78885]: 2025-10-12 20:55:01.873712083 +0000 UTC m=+0.218445778 container start 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:55:01 np0005481680 podman[78885]: 2025-10-12 20:55:01.877317703 +0000 UTC m=+0.222051398 container attach 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 16:55:01 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:55:01 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:55:02 np0005481680 systemd[1]: Starting Ceph crash.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:55:02 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:55:02 np0005481680 reverent_banzai[78903]: 
Oct 12 16:55:02 np0005481680 reverent_banzai[78903]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 12 16:55:02 np0005481680 systemd[1]: libpod-82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1.scope: Deactivated successfully.
Oct 12 16:55:02 np0005481680 podman[78885]: 2025-10-12 20:55:02.279981693 +0000 UTC m=+0.624715388 container died 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-155489a5ad5bab27dbe827562d9a7825f96b6d4ee660d31e73e9857a2d3a04c9-merged.mount: Deactivated successfully.
Oct 12 16:55:02 np0005481680 podman[78885]: 2025-10-12 20:55:02.327965922 +0000 UTC m=+0.672699617 container remove 82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1 (image=quay.io/ceph/ceph:v19, name=reverent_banzai, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:02 np0005481680 systemd[1]: libpod-conmon-82b1ac311ea02450d588558897336afd7056a1c19bd643ebc548e972fb329ac1.scope: Deactivated successfully.
Oct 12 16:55:02 np0005481680 podman[79028]: 2025-10-12 20:55:02.449102106 +0000 UTC m=+0.065310055 container create 3f54836752137aaa14af809189e6c3cf53ee67ac4b1d80ef0cd8fbed0ffa8b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c894b661b8f2d8624b3113e46412b05b05bf92186587ad1f7cbfbe82278fdd62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c894b661b8f2d8624b3113e46412b05b05bf92186587ad1f7cbfbe82278fdd62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c894b661b8f2d8624b3113e46412b05b05bf92186587ad1f7cbfbe82278fdd62/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c894b661b8f2d8624b3113e46412b05b05bf92186587ad1f7cbfbe82278fdd62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 podman[79028]: 2025-10-12 20:55:02.421631915 +0000 UTC m=+0.037839914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:02 np0005481680 podman[79028]: 2025-10-12 20:55:02.527841044 +0000 UTC m=+0.144048993 container init 3f54836752137aaa14af809189e6c3cf53ee67ac4b1d80ef0cd8fbed0ffa8b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 16:55:02 np0005481680 podman[79028]: 2025-10-12 20:55:02.539871843 +0000 UTC m=+0.156079762 container start 3f54836752137aaa14af809189e6c3cf53ee67ac4b1d80ef0cd8fbed0ffa8b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:02 np0005481680 bash[79028]: 3f54836752137aaa14af809189e6c3cf53ee67ac4b1d80ef0cd8fbed0ffa8b09
Oct 12 16:55:02 np0005481680 systemd[1]: Started Ceph crash.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev e8dff2cf-afe8-44d5-8b4f-f5f789bcb1bd (Updating crash deployment (+1 -> 1))
Oct 12 16:55:02 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event e8dff2cf-afe8-44d5-8b4f-f5f789bcb1bd (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.723+0000 7efe7bf5c640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.723+0000 7efe7bf5c640 -1 AuthRegistry(0x7efe740698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.724+0000 7efe7bf5c640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.724+0000 7efe7bf5c640 -1 AuthRegistry(0x7efe7bf5aff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.725+0000 7efe79cd1640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: 2025-10-12T20:55:02.725+0000 7efe7bf5c640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 12 16:55:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
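The ceph-crash startup noise above is the container pinging the cluster "to exercise our key": none of the keyring paths it searches (/etc/ceph/ceph.client.admin.keyring and friends) exist inside the crash container, so the client disables cephx and offers only auth method 1 (none) while the monitor allows only method 2 (cephx), and the ping fails with "[errno 13] RADOS permission denied". The daemon nevertheless stays up and will rescan /var/lib/ceph/crash every 600 seconds. The client.crash.compute-0 entity itself was created successfully moments earlier, which can be confirmed from the host; a sketch assuming the admin conf and keyring paths used elsewhere in this log:

    # Verify the crash daemon's cephx entity exists on the cluster.
    import subprocess

    subprocess.run(
        ["ceph", "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "auth", "get", "client.crash.compute-0"],
        check=True,
    )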
Oct 12 16:55:02 np0005481680 python3[79076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:55:02 np0005481680 podman[79135]: 2025-10-12 20:55:02.824031397 +0000 UTC m=+0.050185223 container create f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:02 np0005481680 systemd[1]: Started libpod-conmon-f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67.scope.
Oct 12 16:55:02 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2da01954f4361553b332a09487ef0a0c1f5bb10a05ee9f118db8b4570803581/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2da01954f4361553b332a09487ef0a0c1f5bb10a05ee9f118db8b4570803581/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2da01954f4361553b332a09487ef0a0c1f5bb10a05ee9f118db8b4570803581/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:02 np0005481680 podman[79135]: 2025-10-12 20:55:02.804938784 +0000 UTC m=+0.031092630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:02 np0005481680 podman[79135]: 2025-10-12 20:55:02.902772806 +0000 UTC m=+0.128926692 container init f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:02 np0005481680 podman[79135]: 2025-10-12 20:55:02.913197171 +0000 UTC m=+0.139350997 container start f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:55:02 np0005481680 podman[79135]: 2025-10-12 20:55:02.916228772 +0000 UTC m=+0.142382688 container attach f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 16:55:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1740306372' entity='client.admin' 
Oct 12 16:55:03 np0005481680 systemd[1]: libpod-f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67.scope: Deactivated successfully.
Oct 12 16:55:03 np0005481680 podman[79135]: 2025-10-12 20:55:03.310157713 +0000 UTC m=+0.536311549 container died f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:55:03 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d2da01954f4361553b332a09487ef0a0c1f5bb10a05ee9f118db8b4570803581-merged.mount: Deactivated successfully.
Oct 12 16:55:03 np0005481680 podman[79135]: 2025-10-12 20:55:03.346119534 +0000 UTC m=+0.572273360 container remove f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67 (image=quay.io/ceph/ceph:v19, name=relaxed_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 16:55:03 np0005481680 systemd[1]: libpod-conmon-f8532199205f6118f21a7e68fc43d5b4320271e55dd075eac541bf9ab955be67.scope: Deactivated successfully.
Oct 12 16:55:03 np0005481680 podman[79275]: 2025-10-12 20:55:03.385821449 +0000 UTC m=+0.099341262 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 16:55:03 np0005481680 podman[79275]: 2025-10-12 20:55:03.494097006 +0000 UTC m=+0.207616839 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:03 np0005481680 python3[79362]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 podman[79382]: 2025-10-12 20:55:03.760485862 +0000 UTC m=+0.041986062 container create 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 16:55:03 np0005481680 ansible-async_wrapper.py[77820]: Done in kid B.
Oct 12 16:55:03 np0005481680 systemd[1]: Started libpod-conmon-167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7.scope.
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct 12 16:55:03 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 12 16:55:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:55:03 np0005481680 podman[79382]: 2025-10-12 20:55:03.745220767 +0000 UTC m=+0.026720987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fe9e5cf59f5e5a63746dd7938fe6a63954282d2523d863bef73b92905bc5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fe9e5cf59f5e5a63746dd7938fe6a63954282d2523d863bef73b92905bc5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fe9e5cf59f5e5a63746dd7938fe6a63954282d2523d863bef73b92905bc5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 12 16:55:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
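The audit trail around this reconfigure shows the full exchange: the mgr fetches the daemon keyring (auth get mon.), reads mon/public_network, and asks for a minimal client configuration (config generate-minimal-conf) before rewriting the mon's config in its data directory. The same query can be reproduced by hand with the containerized CLI used elsewhere in this log:

    # Same request the mgr dispatches above; the output is a short [global]
    # section carrying essentially the fsid and mon_host for this cluster
    # (exact contents depend on what has been set):
    ceph config generate-minimal-conf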
Oct 12 16:55:03 np0005481680 podman[79382]: 2025-10-12 20:55:03.858887402 +0000 UTC m=+0.140387602 container init 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:03 np0005481680 podman[79382]: 2025-10-12 20:55:03.865249122 +0000 UTC m=+0.146749352 container start 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:03 np0005481680 podman[79382]: 2025-10-12 20:55:03.868303174 +0000 UTC m=+0.149803414 container attach 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2047621441' entity='client.admin' 
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.251578752 +0000 UTC m=+0.053161213 container create 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:55:04 np0005481680 systemd[1]: libpod-167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7.scope: Deactivated successfully.
Oct 12 16:55:04 np0005481680 podman[79382]: 2025-10-12 20:55:04.259573767 +0000 UTC m=+0.541074007 container died 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:04 np0005481680 systemd[1]: Started libpod-conmon-8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea.scope.
Oct 12 16:55:04 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6c4fe9e5cf59f5e5a63746dd7938fe6a63954282d2523d863bef73b92905bc5b-merged.mount: Deactivated successfully.
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1740306372' entity='client.admin' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2047621441' entity='client.admin' 
Oct 12 16:55:04 np0005481680 podman[79382]: 2025-10-12 20:55:04.307024049 +0000 UTC m=+0.588524249 container remove 167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7 (image=quay.io/ceph/ceph:v19, name=reverent_boyd, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:04 np0005481680 systemd[1]: libpod-conmon-167afe83c51bb215666e883cca944de0f7a2b0ab2e0efa5cd1f39b9f30196ed7.scope: Deactivated successfully.
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.232279442 +0000 UTC m=+0.033861923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.325836692 +0000 UTC m=+0.127419163 container init 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.332809363 +0000 UTC m=+0.134391834 container start 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.336316349 +0000 UTC m=+0.137898810 container attach 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 16:55:04 np0005481680 dreamy_ardinghelli[79539]: 167 167
Oct 12 16:55:04 np0005481680 systemd[1]: libpod-8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea.scope: Deactivated successfully.
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.337590502 +0000 UTC m=+0.139172963 container died 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:04 np0005481680 systemd[1]: var-lib-containers-storage-overlay-941caedc51668ded32d15dbdea2de714664c1457cb606ad343837a2c01359736-merged.mount: Deactivated successfully.
Oct 12 16:55:04 np0005481680 podman[79514]: 2025-10-12 20:55:04.367797702 +0000 UTC m=+0.169380163 container remove 8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea (image=quay.io/ceph/ceph:v19, name=dreamy_ardinghelli, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:04 np0005481680 systemd[1]: libpod-conmon-8faab104e15bf77c50add198f35f430270596714c4e413b59c12148c439e70ea.scope: Deactivated successfully.
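The bare "167 167" printed by dreamy_ardinghelli above (and by upbeat_feistel further down) is the uid and gid of the ceph user inside the quay.io/ceph/ceph:v19 image; cephadm runs these short-lived containers to learn what file ownership to apply before reconfiguring each daemon. A sketch of an equivalent probe, assuming it stats /var/lib/ceph inside the image (the log records only the output, not the exact command):

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
    # expected output for this image: 167 167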
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:04 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fmjeht (unknown last config time)...
Oct 12 16:55:04 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fmjeht (unknown last config time)...
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:04 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:55:04 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:55:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:04 np0005481680 python3[79633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
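The "#012" trailing "mimic" is an octal escape for a newline left at the end of the command string by the playbook, not part of the version name. Stripped of the podman wrapper shown earlier, the operation is a single command, and its completion is visible below: the monitor logs the dispatch and then "finished", the confident_stonebraker container prints "set require_min_compat_client to mimic", and the osdmap epoch steps from e2 to e3.

    # Direct equivalent of the wrapped command above; with 0 OSDs up this
    # commits immediately, matching the e2 -> e3 osdmap bump in the log:
    ceph osd set-require-min-compat-client mimic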
Oct 12 16:55:04 np0005481680 podman[79636]: 2025-10-12 20:55:04.750047976 +0000 UTC m=+0.082028638 container create 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:55:04 np0005481680 systemd[1]: Started libpod-conmon-85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981.scope.
Oct 12 16:55:04 np0005481680 podman[79636]: 2025-10-12 20:55:04.711966504 +0000 UTC m=+0.043947216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bfbe4a4be450ce0d55365ee2b196c80c93e15fa098fdcc977d83da7c245f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bfbe4a4be450ce0d55365ee2b196c80c93e15fa098fdcc977d83da7c245f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8bfbe4a4be450ce0d55365ee2b196c80c93e15fa098fdcc977d83da7c245f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:04 np0005481680 podman[79636]: 2025-10-12 20:55:04.870391293 +0000 UTC m=+0.202372005 container init 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 16:55:04 np0005481680 podman[79636]: 2025-10-12 20:55:04.880779788 +0000 UTC m=+0.212760440 container start 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:04 np0005481680 podman[79665]: 2025-10-12 20:55:04.885199014 +0000 UTC m=+0.084351705 container create 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:55:04 np0005481680 podman[79636]: 2025-10-12 20:55:04.902441445 +0000 UTC m=+0.234422117 container attach 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:04 np0005481680 podman[79665]: 2025-10-12 20:55:04.840456201 +0000 UTC m=+0.039608962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:04 np0005481680 systemd[1]: Started libpod-conmon-811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d.scope.
Oct 12 16:55:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:05 np0005481680 podman[79665]: 2025-10-12 20:55:05.008824749 +0000 UTC m=+0.207977470 container init 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 16:55:05 np0005481680 podman[79665]: 2025-10-12 20:55:05.023225346 +0000 UTC m=+0.222378047 container start 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:05 np0005481680 upbeat_feistel[79687]: 167 167
Oct 12 16:55:05 np0005481680 systemd[1]: libpod-811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d.scope: Deactivated successfully.
Oct 12 16:55:05 np0005481680 podman[79665]: 2025-10-12 20:55:05.043892731 +0000 UTC m=+0.243045442 container attach 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:05 np0005481680 podman[79665]: 2025-10-12 20:55:05.044314615 +0000 UTC m=+0.243467296 container died 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cc37dcc3f95753294eef631e476687e4d6644d1f0cd311fdeacbf20f58144ec1-merged.mount: Deactivated successfully.
Oct 12 16:55:05 np0005481680 podman[79665]: 2025-10-12 20:55:05.125325789 +0000 UTC m=+0.324478470 container remove 811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d (image=quay.io/ceph/ceph:v19, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:05 np0005481680 systemd[1]: libpod-conmon-811b63171005f6c2f974ae0c6eeb80721ecaef3955edc55616265939b935506d.scope: Deactivated successfully.
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3003500330' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: Reconfiguring mgr.compute-0.fmjeht (unknown last config time)...
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:05 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3003500330' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3003500330' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 12 16:55:06 np0005481680 confident_stonebraker[79674]: set require_min_compat_client to mimic
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 12 16:55:06 np0005481680 systemd[1]: libpod-85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981.scope: Deactivated successfully.
Oct 12 16:55:06 np0005481680 conmon[79674]: conmon 85b052ea7f6311fab757 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981.scope/container/memory.events
Oct 12 16:55:06 np0005481680 podman[79636]: 2025-10-12 20:55:06.422209985 +0000 UTC m=+1.754190687 container died 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:06 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3003500330' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 12 16:55:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6a8bfbe4a4be450ce0d55365ee2b196c80c93e15fa098fdcc977d83da7c245f6-merged.mount: Deactivated successfully.
Oct 12 16:55:06 np0005481680 podman[79636]: 2025-10-12 20:55:06.520740159 +0000 UTC m=+1.852720821 container remove 85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981 (image=quay.io/ceph/ceph:v19, name=confident_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 16:55:06 np0005481680 systemd[1]: libpod-conmon-85b052ea7f6311fab757d51f4b28cef7be6de9a40b168a3445d299a97ff00981.scope: Deactivated successfully.
Oct 12 16:55:07 np0005481680 python3[79788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
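This is the handoff of the assimilated topology to cephadm: orch apply reads the spec mounted at /home/ceph_spec.yaml (from /home/ceph-admin/specs/ceph_spec.yaml on the host) and submits it to the mgr. The audit line below records the dispatch as client.admin, after which the cephadm module begins acting on it: inventory writes, "Added host compute-0", and the deploy of the cephadm binary to compute-1 at the end of this section. Two standard orchestrator queries, not shown in this log, would confirm what the spec registered:

    ceph orch host ls   # hosts the spec added (compute-0 here, compute-1 joining)
    ceph orch ls        # services cephadm is now managing from the spec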
Oct 12 16:55:07 np0005481680 podman[79789]: 2025-10-12 20:55:07.226227963 +0000 UTC m=+0.049654827 container create 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:07 np0005481680 systemd[1]: Started libpod-conmon-75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72.scope.
Oct 12 16:55:07 np0005481680 podman[79789]: 2025-10-12 20:55:07.20474738 +0000 UTC m=+0.028174204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33fcd194f64cb643d9520157d54909407548f2aa8e4317417bd76f5aa012c40b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33fcd194f64cb643d9520157d54909407548f2aa8e4317417bd76f5aa012c40b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33fcd194f64cb643d9520157d54909407548f2aa8e4317417bd76f5aa012c40b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 1 completed events
Oct 12 16:55:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:07 np0005481680 podman[79789]: 2025-10-12 20:55:07.34870341 +0000 UTC m=+0.172130284 container init 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 12 16:55:07 np0005481680 podman[79789]: 2025-10-12 20:55:07.356467907 +0000 UTC m=+0.179894751 container start 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:55:07 np0005481680 podman[79789]: 2025-10-12 20:55:07.360268634 +0000 UTC m=+0.183695518 container attach 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:55:07 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Added host compute-0
Oct 12 16:55:08 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-0
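Note: the "Added host" entries above are cephadm registering compute-0 in its inventory. The CLI equivalent, with the address taken from the youthful_tu output at 16:55:18 below, would be:

    ceph orch host add compute-0 192.168.122.100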
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:08 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:09 np0005481680 ceph-mon[73608]: Added host compute-0
Oct 12 16:55:09 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct 12 16:55:09 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct 12 16:55:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
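Note: the recurring _set_new_cache_sizes entries are the monitor's cache autotuner splitting its memory budget between the incremental/full osdmap caches (inc_alloc/full_alloc) and the RocksDB key-value cache (kv_alloc). The budget is governed by the mon memory target; to inspect it (assuming the default configuration in use here):

    ceph config get mon mon_memory_target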
Oct 12 16:55:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:11 np0005481680 ceph-mon[73608]: Deploying cephadm binary to compute-1
Oct 12 16:55:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:13 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Added host compute-1
Oct 12 16:55:13 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-1
Oct 12 16:55:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: Added host compute-1
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:14 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct 12 16:55:14 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct 12 16:55:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:16 np0005481680 ceph-mon[73608]: Deploying cephadm binary to compute-2
Oct 12 16:55:16 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Added host compute-2
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Added host compute-2
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct 12 16:55:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Added host 'compute-0' with addr '192.168.122.100'
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Added host 'compute-1' with addr '192.168.122.101'
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Added host 'compute-2' with addr '192.168.122.102'
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Scheduled mon update...
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Scheduled mgr update...
Oct 12 16:55:18 np0005481680 youthful_tu[79804]: Scheduled osd.default_drive_group update...
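Note: the youthful_tu output above is `ceph orch apply -i` run against a multi-document spec; the three "Scheduled ... update" lines match the mon, mgr and osd.default_drive_group specs the mgr saved at 16:55:18. A hypothetical reconstruction of that spec follows: the service names and placements are taken from this log (and from the spec text echoed in the 16:55:36 errors below), while the data_devices selection is an assumption.

    cat > /tmp/ceph_spec.yaml <<'EOF'
    service_type: mon
    service_name: mon
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: mgr
    service_name: mgr
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    spec:
      data_devices:
        all: true   # assumption; the actual drive filter is not logged
    EOF
    ceph orch apply -i /tmp/ceph_spec.yaml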
Oct 12 16:55:18 np0005481680 systemd[1]: libpod-75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72.scope: Deactivated successfully.
Oct 12 16:55:18 np0005481680 podman[79789]: 2025-10-12 20:55:18.826284032 +0000 UTC m=+11.649710876 container died 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-33fcd194f64cb643d9520157d54909407548f2aa8e4317417bd76f5aa012c40b-merged.mount: Deactivated successfully.
Oct 12 16:55:18 np0005481680 podman[79789]: 2025-10-12 20:55:18.874343694 +0000 UTC m=+11.697770518 container remove 75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72 (image=quay.io/ceph/ceph:v19, name=youthful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:18 np0005481680 systemd[1]: libpod-conmon-75de971825620c2fb9c43e4117dce86508386e53831c7fd371e7b36ccf2f9b72.scope: Deactivated successfully.
Oct 12 16:55:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:19 np0005481680 python3[79960]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
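Note: unescaped, the ansible _raw_params above amount to the following check (paths, fsid and jq filter exactly as logged); it counts the up OSDs by running `ceph status` inside a throwaway container:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds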
Oct 12 16:55:19 np0005481680 podman[79962]: 2025-10-12 20:55:19.485591325 +0000 UTC m=+0.054987343 container create 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:19 np0005481680 systemd[1]: Started libpod-conmon-01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759.scope.
Oct 12 16:55:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a577f5f4eff2c52c58d40feab05cc37387d71609f525f5d5f821b6c78d82b7eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a577f5f4eff2c52c58d40feab05cc37387d71609f525f5d5f821b6c78d82b7eb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a577f5f4eff2c52c58d40feab05cc37387d71609f525f5d5f821b6c78d82b7eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:19 np0005481680 podman[79962]: 2025-10-12 20:55:19.456691338 +0000 UTC m=+0.026087406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:19 np0005481680 podman[79962]: 2025-10-12 20:55:19.565399229 +0000 UTC m=+0.134795227 container init 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 16:55:19 np0005481680 podman[79962]: 2025-10-12 20:55:19.574937385 +0000 UTC m=+0.144333383 container start 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:19 np0005481680 podman[79962]: 2025-10-12 20:55:19.578986319 +0000 UTC m=+0.148382317 container attach 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Added host compute-2
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Marking host: compute-1 for OSDSpec preview refresh.
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct 12 16:55:19 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 12 16:55:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2848602875' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 12 16:55:20 np0005481680 sad_saha[79978]: 
Oct 12 16:55:20 np0005481680 sad_saha[79978]: {"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":60,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-12T20:54:17:378101+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-12T20:54:17.381390+0000","services":{}},"progress_events":{}}
Oct 12 16:55:20 np0005481680 systemd[1]: libpod-01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759.scope: Deactivated successfully.
Oct 12 16:55:20 np0005481680 conmon[79978]: conmon 01b2bd47760b71698dff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759.scope/container/memory.events
Oct 12 16:55:20 np0005481680 podman[79962]: 2025-10-12 20:55:20.02905084 +0000 UTC m=+0.598446848 container died 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a577f5f4eff2c52c58d40feab05cc37387d71609f525f5d5f821b6c78d82b7eb-merged.mount: Deactivated successfully.
Oct 12 16:55:20 np0005481680 podman[79962]: 2025-10-12 20:55:20.06921869 +0000 UTC m=+0.638614708 container remove 01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759 (image=quay.io/ceph/ceph:v19, name=sad_saha, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:20 np0005481680 systemd[1]: libpod-conmon-01b2bd47760b71698dff1a01c2941d3f174af5b9daa6c83b10a0e14b332ca759.scope: Deactivated successfully.
Oct 12 16:55:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:55:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:33 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:55:33 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:55:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:55:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:55:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:55:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:55:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:35 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:55:35 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
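Note: the generate-minimal-conf / auth get pair dispatched above is how cephadm obtains the contents it then writes to /etc/ceph and /var/lib/ceph/<fsid>/config on each managed host. The same data can be fetched by hand:

    ceph config generate-minimal-conf   # minimal ceph.conf for client hosts
    ceph auth get client.admin          # admin keyring material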
Oct 12 16:55:35 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 5c744f09-a39b-4297-8156-f098465d0d51 (Updating crash deployment (+1 -> 2))
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:55:36.054+0000 7f1deed25640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: service_name: mon
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: placement:
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  hosts:
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-0
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-1
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-2
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:55:36.055+0000 7f1deed25640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: service_name: mgr
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: placement:
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  hosts:
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-0
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-1
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  - compute-2
Oct 12 16:55:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
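Note: both apply failures look like an ordering race rather than a real misconfiguration: the mon and mgr specs were saved at 16:55:18, immediately after "Added host compute-2", and this serve pass still evaluated placement against an inventory that did not yet include compute-2 ("Unknown hosts"). Such failures normally clear on a later serve loop once the host is visible; if they persist, the inventory and specs can be checked and re-applied manually:

    ceph orch host ls                                        # confirm compute-2 is listed
    ceph orch apply mon --placement="compute-0,compute-1,compute-2"
    ceph orch apply mgr --placement="compute-0,compute-1,compute-2"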
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct 12 16:55:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:55:36 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:55:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:55:37
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] No pools available
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:55:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:55:37 np0005481680 ceph-mon[73608]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 12 16:55:37 np0005481680 ceph-mon[73608]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 12 16:55:37 np0005481680 ceph-mon[73608]: Deploying daemon crash.compute-1 on compute-1
Oct 12 16:55:37 np0005481680 ceph-mon[73608]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
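Note: the CEPHADM_APPLY_SPEC_FAIL health check raised here carries the same two spec errors and clears automatically once the specs apply cleanly. To see the per-service detail behind the warning:

    ceph health detail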
Oct 12 16:55:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:38 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 5c744f09-a39b-4297-8156-f098465d0d51 (Updating crash deployment (+1 -> 2))
Oct 12 16:55:38 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 5c744f09-a39b-4297-8156-f098465d0d51 (Updating crash deployment (+1 -> 2)) in 2 seconds
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.06979926 +0000 UTC m=+0.058569113 container create 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 16:55:39 np0005481680 systemd[1]: Started libpod-conmon-25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2.scope.
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.047363051 +0000 UTC m=+0.036132894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.168917618 +0000 UTC m=+0.157687521 container init 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.174529263 +0000 UTC m=+0.163299106 container start 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.177585102 +0000 UTC m=+0.166354985 container attach 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:39 np0005481680 objective_agnesi[80122]: 167 167
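Note: the bare "167 167" from objective_agnesi is consistent with cephadm probing the image for the ceph uid and gid before laying down daemon directories (167:167 is the ceph user in the upstream image). One way to confirm the ids manually; this is an illustrative check, not the exact probe cephadm runs:

    podman run --rm --entrypoint id quay.io/ceph/ceph:v19 ceph
    # uid=167(ceph) gid=167(ceph) groups=167(ceph)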
Oct 12 16:55:39 np0005481680 systemd[1]: libpod-25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2.scope: Deactivated successfully.
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.179840929 +0000 UTC m=+0.168610772 container died 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:55:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cd5be61b6aab39d8c2f758b2c3268179625d90052e3d45dfeb60dd5a8c31fdd8-merged.mount: Deactivated successfully.
Oct 12 16:55:39 np0005481680 podman[80106]: 2025-10-12 20:55:39.217877222 +0000 UTC m=+0.206647075 container remove 25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:39 np0005481680 systemd[1]: libpod-conmon-25f0dacf0246973cc78cacfcbd3c0b5e7a1aba24515877186143c94c3ebf26b2.scope: Deactivated successfully.
Oct 12 16:55:39 np0005481680 podman[80144]: 2025-10-12 20:55:39.398461723 +0000 UTC m=+0.045448204 container create c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 16:55:39 np0005481680 systemd[1]: Started libpod-conmon-c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8.scope.
Oct 12 16:55:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:39 np0005481680 podman[80144]: 2025-10-12 20:55:39.3789605 +0000 UTC m=+0.025947011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:39 np0005481680 podman[80144]: 2025-10-12 20:55:39.488895377 +0000 UTC m=+0.135881908 container init c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:39 np0005481680 podman[80144]: 2025-10-12 20:55:39.498268239 +0000 UTC m=+0.145254730 container start c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:39 np0005481680 podman[80144]: 2025-10-12 20:55:39.501534533 +0000 UTC m=+0.148521024 container attach c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:55:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:39 np0005481680 upbeat_morse[80161]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:55:39 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:39 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:39 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 47abdfbc-9d1c-416d-8d2d-2f925f341a02
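Note: upbeat_morse is ceph-volume preparing the LVM-backed OSD ("0 physical, 1 LVM" above): the `-i - osd new <uuid>` call allocates the next free OSD id for that uuid, reading the generated cephx secret as JSON on stdin. A minimal sketch of that step with a placeholder key (the real secret comes from the ceph-authtool runs above):

    echo '{"cephx_secret": "<generated-key>"}' | \
      ceph --cluster ceph --name client.bootstrap-osd \
           --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
           -i - osd new 47abdfbc-9d1c-416d-8d2d-2f925f341a02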
Oct 12 16:55:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4025209433' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4025209433' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}]': finished
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ab97d633-9f80-4349-9abb-a96e33b69914"} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/319345381' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ab97d633-9f80-4349-9abb-a96e33b69914"}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/319345381' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ab97d633-9f80-4349-9abb-a96e33b69914"}]': finished
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:40 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
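
[note] The two "failed to return metadata" errors are expected at this point: osdmap e5 reports "2 total, 0 up, 2 in", i.e. the ids exist in the map but neither daemon has booted and reported its metadata yet. Once the OSDs start, the same query succeeds (standard CLI, nothing assumed):

    # After the daemons boot, these return hostname, devices, bluestore details, etc.
    ceph osd metadata 0
    ceph osd metadata 1
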
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/4025209433' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/4025209433' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}]': finished
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/319345381' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ab97d633-9f80-4349-9abb-a96e33b69914"}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/319345381' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ab97d633-9f80-4349-9abb-a96e33b69914"}]': finished
Oct 12 16:55:40 np0005481680 lvm[80226]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:55:40 np0005481680 lvm[80226]: VG ceph_vg0 finished
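
[note] The lvm[80226] lines come from LVM's udev-driven pvscan autoactivation noticing the physical volume come online; they also reveal that this OSD sits on a loop device (/dev/loop3), typical of a test deployment. A hedged sketch of how such a loop-backed VG is commonly built; the image path is hypothetical, only /dev/loop3 and ceph_vg0/ceph_lv0 appear in the log (the LV size of 21470642176 bytes later in the log is roughly 20 GiB):

    # Hypothetical reconstruction of the backing storage seen in this log.
    truncate -s 20G /var/lib/ceph-osd-0.img     # backing file path assumed
    losetup /dev/loop3 /var/lib/ceph-osd-0.img  # attach as the loop device from the log
    vgcreate ceph_vg0 /dev/loop3                # implicit pvcreate
    lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0
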
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4048515774' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 12 16:55:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117150654' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: stderr: got monmap epoch 1
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: --> Creating keyring file for osd.0
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 12 16:55:40 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 47abdfbc-9d1c-416d-8d2d-2f925f341a02 --setuser ceph --setgroup ceph
Oct 12 16:55:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 2 completed events
Oct 12 16:55:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:55:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:42 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 12 16:55:43 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:43 np0005481680 ceph-mon[73608]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: stderr: 2025-10-12T20:55:41.033+0000 7fe7f75a7740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: stderr: 2025-10-12T20:55:41.295+0000 7fe7f75a7740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
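
[note] The two stderr lines from `ceph-osd --mkfs` just above are routine on a blank LV: there is no bluestore label or fsid to read before the device is formatted, so the "-1"-level messages are harmless here, as the successful prepare confirms. After prepare, the freshly written label can be verified with a standard ceph-bluestore-tool subcommand:

    # Reads the bluestore superblock label written by --mkfs; should now show
    # osd_uuid 47abdfbc-9d1c-416d-8d2d-2f925f341a02.
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
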
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:43 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 12 16:55:44 np0005481680 upbeat_morse[80161]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:44 np0005481680 upbeat_morse[80161]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 12 16:55:44 np0005481680 upbeat_morse[80161]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
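
[note] "prepare" followed by "activate" is exactly what the single `ceph-volume lvm create` wrapper performs, which is why the container finishes by reporting all three as successful. The equivalent one-shot invocation, with the LV name taken from the log:

    # One-shot equivalent of the prepare + activate sequence above.
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0
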
Oct 12 16:55:44 np0005481680 systemd[1]: libpod-c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8.scope: Deactivated successfully.
Oct 12 16:55:44 np0005481680 systemd[1]: libpod-c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8.scope: Consumed 2.123s CPU time.
Oct 12 16:55:44 np0005481680 podman[80144]: 2025-10-12 20:55:44.036740576 +0000 UTC m=+4.683727097 container died c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7ca29223eedd6eff044453f19132e82abb8685fc800f82a51929b4ba88f8cfe0-merged.mount: Deactivated successfully.
Oct 12 16:55:44 np0005481680 podman[80144]: 2025-10-12 20:55:44.092827863 +0000 UTC m=+4.739814364 container remove c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_morse, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:55:44 np0005481680 systemd[1]: libpod-conmon-c7bda50b00d7800974dba6b8e2a4ac57280d0c181f95001bddc82bbd0181caf8.scope: Deactivated successfully.
Oct 12 16:55:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.633619629 +0000 UTC m=+0.036478833 container create 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:44 np0005481680 systemd[1]: Started libpod-conmon-77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209.scope.
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.616986495 +0000 UTC m=+0.019845729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.730625998 +0000 UTC m=+0.133485212 container init 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.736536061 +0000 UTC m=+0.139395255 container start 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.740207998 +0000 UTC m=+0.143067192 container attach 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:55:44 np0005481680 cranky_carson[81244]: 167 167
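
[note] The short-lived cranky_carson container prints "167 167": the uid and gid of the ceph user baked into the image, which cephadm collects so it can chown host-side paths to match (heuristic_keller at 16:55:46 repeats the same probe). A sketch of the probe, assuming it stats /var/lib/ceph inside the image; the exact command is not shown in the log:

    # Sketch: read the ceph uid/gid from the image (probe command assumed).
    podman run --rm \
        quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph
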
Oct 12 16:55:44 np0005481680 systemd[1]: libpod-77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209.scope: Deactivated successfully.
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.74185149 +0000 UTC m=+0.144710684 container died 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3bafda852dbed522a6bf39b0d36847c08a95e274fd0b48ebbc8c0c3d27b9cffd-merged.mount: Deactivated successfully.
Oct 12 16:55:44 np0005481680 podman[81228]: 2025-10-12 20:55:44.790120788 +0000 UTC m=+0.192979982 container remove 77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:55:44 np0005481680 systemd[1]: libpod-conmon-77b54b8138cef2dbbcb7d3a3fff89a06582754d0b8ee0828c84e954bbf590209.scope: Deactivated successfully.
Oct 12 16:55:44 np0005481680 podman[81268]: 2025-10-12 20:55:44.938592919 +0000 UTC m=+0.038655978 container create c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 16:55:44 np0005481680 systemd[1]: Started libpod-conmon-c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0.scope.
Oct 12 16:55:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778736b1d0ddbebd064c70c77d648dd515811993ee1c461c88ad7b9c8acf65ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778736b1d0ddbebd064c70c77d648dd515811993ee1c461c88ad7b9c8acf65ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778736b1d0ddbebd064c70c77d648dd515811993ee1c461c88ad7b9c8acf65ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778736b1d0ddbebd064c70c77d648dd515811993ee1c461c88ad7b9c8acf65ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:45.012891276 +0000 UTC m=+0.112954385 container init c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:44.919758228 +0000 UTC m=+0.019821317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:45.024382236 +0000 UTC m=+0.124445305 container start c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:45.027339993 +0000 UTC m=+0.127403102 container attach c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:45 np0005481680 competent_tesla[81284]: {
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:    "0": [
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:        {
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "devices": [
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "/dev/loop3"
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            ],
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "lv_name": "ceph_lv0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "lv_size": "21470642176",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "name": "ceph_lv0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "tags": {
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.cluster_name": "ceph",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.crush_device_class": "",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.encrypted": "0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.osd_id": "0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.type": "block",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.vdo": "0",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:                "ceph.with_tpm": "0"
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            },
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "type": "block",
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:            "vg_name": "ceph_vg0"
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:        }
Oct 12 16:55:45 np0005481680 competent_tesla[81284]:    ]
Oct 12 16:55:45 np0005481680 competent_tesla[81284]: }
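
[note] The competent_tesla container dumped `ceph-volume lvm list` as JSON: one entry keyed by OSD id, carrying the backing device (/dev/loop3), the LV path, and the ceph.* LV tags that record the cluster fsid, osd fsid, and osd id. The same inventory can be reproduced and filtered on the host; jq is an assumption here, any JSON tool works:

    # Reproduce the inventory above and pull out osd.0's fsid.
    ceph-volume lvm list --format json | jq -r '."0"[0].tags."ceph.osd_fsid"'
    # -> 47abdfbc-9d1c-416d-8d2d-2f925f341a02
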
Oct 12 16:55:45 np0005481680 systemd[1]: libpod-c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0.scope: Deactivated successfully.
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:45.284505948 +0000 UTC m=+0.384569057 container died c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 16:55:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-778736b1d0ddbebd064c70c77d648dd515811993ee1c461c88ad7b9c8acf65ce-merged.mount: Deactivated successfully.
Oct 12 16:55:45 np0005481680 podman[81268]: 2025-10-12 20:55:45.343956618 +0000 UTC m=+0.444019687 container remove c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:45 np0005481680 systemd[1]: libpod-conmon-c21c1706908e5b6a841ac355739dcad848c2f0352064363bfad90a2145c3f3e0.scope: Deactivated successfully.
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:45 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 12 16:55:45 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:55:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:55:45 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Oct 12 16:55:45 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Oct 12 16:55:45 np0005481680 podman[81394]: 2025-10-12 20:55:45.969141887 +0000 UTC m=+0.056033622 container create db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:46 np0005481680 systemd[1]: Started libpod-conmon-db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d.scope.
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:45.9446893 +0000 UTC m=+0.031581105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:46.063232331 +0000 UTC m=+0.150124056 container init db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:46.074423863 +0000 UTC m=+0.161315568 container start db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:46 np0005481680 heuristic_keller[81411]: 167 167
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:46.077965935 +0000 UTC m=+0.164857690 container attach db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 16:55:46 np0005481680 systemd[1]: libpod-db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d.scope: Deactivated successfully.
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:46.079637298 +0000 UTC m=+0.166529033 container died db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f4e75435865fffd73df920b18b50b36b39f3b842ef7e0f403561b4282364ec6d-merged.mount: Deactivated successfully.
Oct 12 16:55:46 np0005481680 podman[81394]: 2025-10-12 20:55:46.125587776 +0000 UTC m=+0.212479511 container remove db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_keller, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 16:55:46 np0005481680 systemd[1]: libpod-conmon-db45a4716cbf43301ac1e6b2817dd8f6fd0260ebf74148c51d6f1e613a9d380d.scope: Deactivated successfully.
Oct 12 16:55:46 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 12 16:55:46 np0005481680 ceph-mon[73608]: Deploying daemon osd.0 on compute-0
Oct 12 16:55:46 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.486529327 +0000 UTC m=+0.070424837 container create 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct 12 16:55:46 np0005481680 systemd[1]: Started libpod-conmon-373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539.scope.
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.459386589 +0000 UTC m=+0.043282139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.594794439 +0000 UTC m=+0.178689979 container init 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.61324665 +0000 UTC m=+0.197142150 container start 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.617112341 +0000 UTC m=+0.201007851 container attach 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:55:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test[81458]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 12 16:55:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test[81458]:                            [--no-systemd] [--no-tmpfs]
Oct 12 16:55:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test[81458]: ceph-volume activate: error: unrecognized arguments: --bad-option
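
[note] This container fails on purpose: passing a bogus `--bad-option` to `ceph-volume activate` and getting "unrecognized arguments" (rather than an unknown-subcommand error) tells the caller that the unified `activate` subcommand exists in this ceph-volume build. That reading is an inference from the "-activate-test" container name, not stated in the log. Reduced to a shell test:

    # Inferred capability probe: "unrecognized arguments" means the subcommand
    # itself parsed, i.e. this ceph-volume supports plain `activate`.
    if ceph-volume activate --bad-option 2>&1 | grep -q 'unrecognized arguments'; then
        echo "ceph-volume 'activate' subcommand available"
    fi
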
Oct 12 16:55:46 np0005481680 systemd[1]: libpod-373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539.scope: Deactivated successfully.
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.812652399 +0000 UTC m=+0.396547879 container died 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4611b891246fdcdf5b98bcd834b24f276e3a8806b28d773c558eae72ee86fe82-merged.mount: Deactivated successfully.
Oct 12 16:55:46 np0005481680 podman[81442]: 2025-10-12 20:55:46.866333008 +0000 UTC m=+0.450228508 container remove 373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:46 np0005481680 systemd[1]: libpod-conmon-373f6082ee0ab6d91d8ae897de2ebb425431b0bef3e49203b66f9334930cf539.scope: Deactivated successfully.
Oct 12 16:55:47 np0005481680 systemd[1]: Reloading.
Oct 12 16:55:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:55:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:55:47 np0005481680 ceph-mon[73608]: Deploying daemon osd.1 on compute-1
Oct 12 16:55:47 np0005481680 systemd[1]: Reloading.
Oct 12 16:55:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:55:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:55:47 np0005481680 systemd[1]: Starting Ceph osd.0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
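
[note] The two systemd "Reloading." passes pick up the unit file cephadm just wrote; cephadm names OSD units `ceph-<cluster fsid>@osd.<id>`, which matches the "Starting Ceph osd.0 for 5adb8c35-..." line. Standard check on the host:

    systemctl status 'ceph-5adb8c35-1b74-5730-a252-62321f654cd5@osd.0.service'
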
Oct 12 16:55:48 np0005481680 podman[81619]: 2025-10-12 20:55:48.017650575 +0000 UTC m=+0.064176544 container create 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:48 np0005481680 podman[81619]: 2025-10-12 20:55:47.994508462 +0000 UTC m=+0.041034431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:48 np0005481680 podman[81619]: 2025-10-12 20:55:48.127238733 +0000 UTC m=+0.173764772 container init 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:55:48 np0005481680 podman[81619]: 2025-10-12 20:55:48.14058662 +0000 UTC m=+0.187112559 container start 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:48 np0005481680 podman[81619]: 2025-10-12 20:55:48.144321048 +0000 UTC m=+0.190847037 container attach 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:55:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 bash[81619]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 bash[81619]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 lvm[81717]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:55:48 np0005481680 lvm[81717]: VG ceph_vg0 finished
Oct 12 16:55:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 12 16:55:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 bash[81619]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 12 16:55:48 np0005481680 bash[81619]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 12 16:55:48 np0005481680 bash[81619]: Running command: /usr/bin/ceph-authtool --gen-print-key
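
[note] Inside the unit's activate container, the unified activate path first scans for raw-mode OSDs; since this OSD was prepared with the lvm backend, the raw scan finds nothing and the command falls back to LVM activation, so "Failed to activate via raw" is the expected outcome here, not a fault. Each message appears twice because both the container (name-prefixed tag, pid 81635) and the unit's shell (bash[81619]) forward the same stdout. The two backends can be inspected separately:

    # raw backend: empty for this deployment; lvm backend: shows osd.0 on ceph_vg0/ceph_lv0.
    ceph-volume raw list
    ceph-volume lvm list
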
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:49 np0005481680 bash[81619]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 12 16:55:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate[81635]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 12 16:55:49 np0005481680 bash[81619]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 12 16:55:49 np0005481680 systemd[1]: libpod-28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19.scope: Deactivated successfully.
Oct 12 16:55:49 np0005481680 systemd[1]: libpod-28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19.scope: Consumed 1.612s CPU time.
Oct 12 16:55:49 np0005481680 podman[81812]: 2025-10-12 20:55:49.54634512 +0000 UTC m=+0.048256388 container died 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 16:55:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e9a23f4340f0ba2f432ee0266e828575c738d4b78376bf021c8d64f2da6c5fc4-merged.mount: Deactivated successfully.
Oct 12 16:55:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:49 np0005481680 podman[81812]: 2025-10-12 20:55:49.596613511 +0000 UTC m=+0.098524739 container remove 28057597a32d1b9d361437e13227b55551ac3a32a7d3f00340be215d82797f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:49 np0005481680 podman[81872]: 2025-10-12 20:55:49.827826949 +0000 UTC m=+0.045987300 container create 3ed66bc5610ee3df448b464e59bdffd7a195cc524e15fbba6db1f702431acf31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af056350c940eeb5cb4f342d130bbdbad475187599f3c8d2e1330168dfe5edb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af056350c940eeb5cb4f342d130bbdbad475187599f3c8d2e1330168dfe5edb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af056350c940eeb5cb4f342d130bbdbad475187599f3c8d2e1330168dfe5edb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af056350c940eeb5cb4f342d130bbdbad475187599f3c8d2e1330168dfe5edb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af056350c940eeb5cb4f342d130bbdbad475187599f3c8d2e1330168dfe5edb9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:49 np0005481680 podman[81872]: 2025-10-12 20:55:49.808259709 +0000 UTC m=+0.026420110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:49 np0005481680 podman[81872]: 2025-10-12 20:55:49.907453155 +0000 UTC m=+0.125613516 container init 3ed66bc5610ee3df448b464e59bdffd7a195cc524e15fbba6db1f702431acf31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:49 np0005481680 podman[81872]: 2025-10-12 20:55:49.923971146 +0000 UTC m=+0.142131497 container start 3ed66bc5610ee3df448b464e59bdffd7a195cc524e15fbba6db1f702431acf31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:49 np0005481680 bash[81872]: 3ed66bc5610ee3df448b464e59bdffd7a195cc524e15fbba6db1f702431acf31
Oct 12 16:55:49 np0005481680 systemd[1]: Started Ceph osd.0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: pidfile_write: ignore empty --pid-file
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:49 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:50 np0005481680 python3[81981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:55:50 np0005481680 podman[81997]: 2025-10-12 20:55:50.463649006 +0000 UTC m=+0.044857021 container create 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:50 np0005481680 systemd[1]: Started libpod-conmon-48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345.scope.
Oct 12 16:55:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:50 np0005481680 podman[81997]: 2025-10-12 20:55:50.444319643 +0000 UTC m=+0.025527678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:55:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ae72ab97f10b2a2e90c2c82623c2815acb97dffa1d61c4dfe45bec8151381/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ae72ab97f10b2a2e90c2c82623c2815acb97dffa1d61c4dfe45bec8151381/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24ae72ab97f10b2a2e90c2c82623c2815acb97dffa1d61c4dfe45bec8151381/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:50 np0005481680 podman[81997]: 2025-10-12 20:55:50.566715863 +0000 UTC m=+0.147923888 container init 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:50 np0005481680 podman[81997]: 2025-10-12 20:55:50.5792383 +0000 UTC m=+0.160446335 container start 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:55:50 np0005481680 podman[81997]: 2025-10-12 20:55:50.583004278 +0000 UTC m=+0.164212313 container attach 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.798478566 +0000 UTC m=+0.055105288 container create c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:50 np0005481680 systemd[1]: Started libpod-conmon-c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f.scope.
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:50 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.77180806 +0000 UTC m=+0.028434832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.882659501 +0000 UTC m=+0.139286203 container init c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.890029733 +0000 UTC m=+0.146656435 container start c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:50 np0005481680 keen_colden[82087]: 167 167
Oct 12 16:55:50 np0005481680 systemd[1]: libpod-c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f.scope: Deactivated successfully.
Oct 12 16:55:50 np0005481680 conmon[82087]: conmon c83df91ed98e386e143d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f.scope/container/memory.events
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.893360829 +0000 UTC m=+0.149987521 container attach c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.903997237 +0000 UTC m=+0.160623939 container died c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:55:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cd2625dace5dee5db5f5b7ccff180da2d5531dc3b8269ef25425f3722bd672da-merged.mount: Deactivated successfully.
Oct 12 16:55:50 np0005481680 podman[82050]: 2025-10-12 20:55:50.945995902 +0000 UTC m=+0.202622604 container remove c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_colden, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:55:50 np0005481680 systemd[1]: libpod-conmon-c83df91ed98e386e143d738761ea0d17c17a9fcb9077969db898d4100501025f.scope: Deactivated successfully.
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 12 16:55:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172722304' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 12 16:55:51 np0005481680 agitated_mendeleev[82028]: 
Oct 12 16:55:51 np0005481680 agitated_mendeleev[82028]: {"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":91,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1760302540,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-12T20:54:17:378101+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-12T20:55:40.057511+0000","services":{}},"progress_events":{}}
Oct 12 16:55:51 np0005481680 systemd[1]: libpod-48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345.scope: Deactivated successfully.
Oct 12 16:55:51 np0005481680 podman[81997]: 2025-10-12 20:55:51.064931323 +0000 UTC m=+0.646139358 container died 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c24ae72ab97f10b2a2e90c2c82623c2815acb97dffa1d61c4dfe45bec8151381-merged.mount: Deactivated successfully.
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:51 np0005481680 podman[81997]: 2025-10-12 20:55:51.122783661 +0000 UTC m=+0.703991676 container remove 48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345 (image=quay.io/ceph/ceph:v19, name=agitated_mendeleev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:55:51 np0005481680 systemd[1]: libpod-conmon-48d35448afde3e13268039b367d2ed55838c3c460efc62c256cecdcda9e8d345.scope: Deactivated successfully.
Oct 12 16:55:51 np0005481680 podman[82124]: 2025-10-12 20:55:51.153274716 +0000 UTC m=+0.044944083 container create 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 12 16:55:51 np0005481680 systemd[1]: Started libpod-conmon-4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7.scope.
Oct 12 16:55:51 np0005481680 podman[82124]: 2025-10-12 20:55:51.131257282 +0000 UTC m=+0.022926699 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29fe2287af5f55cf9344b79dd89dc683e2427828299fa3190cba13d5cdc4c78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29fe2287af5f55cf9344b79dd89dc683e2427828299fa3190cba13d5cdc4c78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29fe2287af5f55cf9344b79dd89dc683e2427828299fa3190cba13d5cdc4c78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29fe2287af5f55cf9344b79dd89dc683e2427828299fa3190cba13d5cdc4c78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:51 np0005481680 podman[82124]: 2025-10-12 20:55:51.247762489 +0000 UTC m=+0.139431896 container init 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 16:55:51 np0005481680 podman[82124]: 2025-10-12 20:55:51.266225851 +0000 UTC m=+0.157895218 container start 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:51 np0005481680 podman[82124]: 2025-10-12 20:55:51.271503528 +0000 UTC m=+0.163172935 container attach 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d253c59800 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: load: jerasure load: lrc 
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 12 16:55:51 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:52 np0005481680 lvm[82229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:55:52 np0005481680 lvm[82229]: VG ceph_vg0 finished
Oct 12 16:55:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:52 np0005481680 modest_fermi[82145]: {}
Oct 12 16:55:52 np0005481680 systemd[1]: libpod-4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7.scope: Deactivated successfully.
Oct 12 16:55:52 np0005481680 systemd[1]: libpod-4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7.scope: Consumed 1.350s CPU time.
Oct 12 16:55:52 np0005481680 podman[82124]: 2025-10-12 20:55:52.143857762 +0000 UTC m=+1.035527159 container died 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:55:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c29fe2287af5f55cf9344b79dd89dc683e2427828299fa3190cba13d5cdc4c78-merged.mount: Deactivated successfully.
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:52 np0005481680 podman[82124]: 2025-10-12 20:55:52.203988119 +0000 UTC m=+1.095657516 container remove 4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 12 16:55:52 np0005481680 systemd[1]: libpod-conmon-4cec17c818b18acdf12d564ead94e1e2f6554edc0ab279034fa4ff619663ead7.scope: Deactivated successfully.
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount shared_bdev_used = 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: RocksDB version: 7.9.2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Git sha 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DB SUMMARY
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DB Session ID:  OWFIQ77A3LRVDFE8T4K3
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: CURRENT file:  CURRENT
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: IDENTITY file:  IDENTITY
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.error_if_exists: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.create_if_missing: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.paranoid_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                     Options.env: 0x55d254ac5dc0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                Options.info_log: 0x55d254ac97a0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_file_opening_threads: 16
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.statistics: (nil)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.use_fsync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.max_log_file_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.allow_fallocate: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.use_direct_reads: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.create_missing_column_families: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.db_log_dir: 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                 Options.wal_dir: db.wal
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.advise_random_on_open: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.write_buffer_manager: 0x55d254bbea00
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                            Options.rate_limiter: (nil)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.unordered_write: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.row_cache: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.wal_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.allow_ingest_behind: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.two_write_queues: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.manual_wal_flush: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.wal_compression: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.atomic_flush: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.log_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.allow_data_in_errors: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.db_host_id: __hostname__
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_background_jobs: 4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_background_compactions: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_subcompactions: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.max_open_files: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.max_background_flushes: -1
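[Annotation] The block ending above is the tail of the database-wide DBOptions dump RocksDB emits at open time. A minimal sketch of how a few of those values would be set through the stock RocksDB C++ API (the helper name ApplyLoggedDbOptions is hypothetical; the values are copied from the log lines above):

    // db_options_sketch.cc -- a few of the DB-wide values logged above,
    // restated as stock rocksdb::DBOptions assignments (illustrative only).
    #include <rocksdb/options.h>

    void ApplyLoggedDbOptions(rocksdb::DBOptions& db) {
      db.max_background_jobs = 4;              // flushes + compactions share 4 jobs
      db.max_subcompactions = 1;
      db.max_total_wal_size = 1073741824;      // 1 GiB of WAL before forced flush
      db.max_open_files = -1;                  // keep every table file open
      db.compaction_readahead_size = 2097152;  // 2 MiB readahead during compaction
      db.delayed_write_rate = 16777216;        // 16 MiB/s while write-stalled
      db.stats_dump_period_sec = 600;
    }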
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Compression algorithms supported:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kZSTD supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kXpressCompression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kBZip2Compression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kLZ4Compression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kZlibCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kLZ4HCCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     kSnappyCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DMutex implementation: pthread_mutex_t
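[Annotation] The capability report above shows which codecs this rocksdb build was compiled with (LZ4, Zlib, LZ4HC and Snappy; no ZSTD or BZip2). Assuming a stock RocksDB C++ build, the same list can be queried at runtime via the public helper rocksdb::GetSupportedCompressions(); a minimal sketch:

    // compression_probe.cc -- list the codecs compiled into this RocksDB
    // build, mirroring the "Compression algorithms supported" lines above.
    #include <iostream>
    #include <rocksdb/convenience.h>   // rocksdb::GetSupportedCompressions()
    #include <rocksdb/options.h>       // rocksdb::CompressionType

    int main() {
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions()) {
        // Values follow the rocksdb::CompressionType enum, e.g. kLZ4Compression.
        std::cout << "supported codec enum: " << static_cast<int>(t) << "\n";
      }
      return 0;
    }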
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
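[Annotation] Note the two lines above: the store is opened read-only (db_impl_readonly.cc) and the current state is rebuilt from db/MANIFEST-000032 without replaying any WAL. A read-only open performs no writes and starts no background flush or compaction work. A minimal sketch of that open path against the plain RocksDB C++ API (the path string is a placeholder, not the OSD's actual layout):

    // readonly_open.cc -- sketch of a read-only RocksDB open, as logged above.
    #include <cassert>
    #include <rocksdb/db.h>

    int main() {
      rocksdb::Options options;   // defaults; the OSD supplies its own tuning
      rocksdb::DB* db = nullptr;
      // OpenForReadOnly recovers the current version from the MANIFEST
      // (cf. "Recovering from manifest file" above) but never writes.
      rocksdb::Status s = rocksdb::DB::OpenForReadOnly(
          options, "/path/to/osd/db" /* placeholder */, &db);
      assert(s.ok());
      delete db;
      return 0;
    }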
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
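[Annotation] The table_factory dump above describes the BlockBasedTable layout shared by every column family in this store: 4 KiB blocks, format_version 5, a whole-key bloom filter, and index/filter blocks cached in a ~460 MiB BinnedLRUCache (483183820 bytes, 2^4 shards; BinnedLRUCache is a Ceph-internal cache). A rough reconstruction against the stock RocksDB API, with NewLRUCache standing in for BinnedLRUCache and an assumed 10 bits/key for the bloom filter (the log does not record bits/key):

    // table_options_sketch.cc -- approximate restatement of the logged
    // BlockBasedTable settings (illustrative; BinnedLRUCache is Ceph-internal,
    // so the stock NewLRUCache is swapped in here).
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeLoggedTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                      // block_size: 4096
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.format_version = 5;                     // format_version: 5
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }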
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
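[Annotation] The [default] column family above carries Ceph's merge operator (.T:int64_array.b:bitwise_xor); the sharded families that follow (m-0, m-1, m-2, p-0, p-1, ...) repeat the same tuning with merge_operator: None. In a Ceph deployment these values come from BlueStore's RocksDB defaults and the bluestore_rocksdb_options setting; restated against the stock C++ API they correspond to assignments like the sketch below (helper name hypothetical, values copied from the log):

    // cf_options_sketch.cc -- the per-column-family tuning logged above,
    // restated as stock rocksdb::ColumnFamilyOptions assignments.
    #include <rocksdb/options.h>

    void ApplyLoggedCfOptions(rocksdb::ColumnFamilyOptions& cf) {
      cf.write_buffer_size = 16777216;            // 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;    // each flush merges 6 memtables
      cf.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;        // 64 MiB SSTs
      cf.max_bytes_for_level_base = 1073741824;   // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                           // 30-day compaction TTL
    }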
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cee9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cee9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cee9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4b1c027e-76b1-4836-be35-004aee77ed34
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552560492, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552560771, "job": 1, "event": "recovery_finished"}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
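(Not part of the log.) The option string logged by _open_db is a flat comma-separated key=value list. A small Python sketch that splits it into a dict, assuming no value contains a comma, which holds for this string:

    # Option string copied verbatim from the _open_db line above.
    opts_str = ('compression=kLZ4Compression,max_write_buffer_number=64,'
                'min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,'
                'write_buffer_size=16777216,max_background_jobs=4,'
                'level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,'
                'max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,'
                'max_total_wal_size=1073741824,writable_file_max_buffer_size=0')

    # Split on commas, then on the first '=' of each pair.
    opts = dict(kv.split('=', 1) for kv in opts_str.split(','))
    print(opts['write_buffer_size'])  # -> 16777216 (16 MiB memtables, matching the dumps below)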
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: freelist init
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: freelist _read_cfg
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
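(Not part of the log.) The _init_alloc line reports sizes in hex; converting them back (values copied from the line above) confirms the "20 GiB" figure and shows how little of the device is allocated:

    GiB = 1 << 30
    capacity = 0x4ffc00000   # from the _init_alloc line above
    free     = 0x4ffbfd000   # from the _init_alloc line above

    print(capacity)                  # 21470642176 bytes, logged as "20 GiB"
    print(round(capacity / GiB, 3))  # 19.996
    print(capacity - free)           # 12288 bytes in use: 3 blocks of 0x1000 (4 KiB)
    print(free / capacity)           # ~0.9999994, i.e. the device is essentially empty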
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs umount
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) close
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bdev(0x55d254af5000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluefs mount shared_bdev_used = 4718592
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: RocksDB version: 7.9.2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Git sha 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DB SUMMARY
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DB Session ID:  OWFIQ77A3LRVDFE8T4K2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: CURRENT file:  CURRENT
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: IDENTITY file:  IDENTITY
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.error_if_exists: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.create_if_missing: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.paranoid_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                     Options.env: 0x55d254c62310
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                Options.info_log: 0x55d254ac9920
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_file_opening_threads: 16
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.statistics: (nil)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.use_fsync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.max_log_file_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.allow_fallocate: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.use_direct_reads: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.create_missing_column_families: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.db_log_dir: 
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                                 Options.wal_dir: db.wal
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.advise_random_on_open: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.write_buffer_manager: 0x55d254bbea00
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                            Options.rate_limiter: (nil)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.unordered_write: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.row_cache: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                              Options.wal_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.allow_ingest_behind: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.two_write_queues: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.manual_wal_flush: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.wal_compression: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.atomic_flush: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.log_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.allow_data_in_errors: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.db_host_id: __hostname__
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_background_jobs: 4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_background_compactions: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_subcompactions: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.max_open_files: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.max_background_flushes: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Compression algorithms supported:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kZSTD supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kXpressCompression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kBZip2Compression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kLZ4Compression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kZlibCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: 	kSnappyCompression supported: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
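The level-sizing arithmetic implied by the values above is worth spelling out: with max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, every addtl factor at 1, and level_compaction_dynamic_level_bytes = 0, the per-level targets grow upward from L1 = 1 GiB to L2 = 8 GiB, L3 = 64 GiB, and so on through num_levels = 7. Likewise max_compaction_bytes = 1677721600 is exactly 25 x target_file_size_base (67108864). A minimal standalone C++ sketch that reproduces the per-level targets (no RocksDB dependency; every constant is copied from the dump above):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Values taken verbatim from the ceph-osd rocksdb option dump.
      const uint64_t max_bytes_for_level_base = 1073741824ULL;  // 1 GiB
      const double   max_bytes_for_level_multiplier = 8.0;
      const int      num_levels = 7;

      // With level_compaction_dynamic_level_bytes=0 and all addtl factors 1:
      // target(L1) = base, target(Ln) = target(Ln-1) * multiplier.
      double target = static_cast<double>(max_bytes_for_level_base);
      for (int level = 1; level < num_levels; ++level) {
        std::printf("L%d target: %.0f bytes (%.1f GiB)\n",
                    level, target, target / (1024.0 * 1024.0 * 1024.0));
        target *= max_bytes_for_level_multiplier;
      }
      return 0;
    }

Run as-is this prints L1 = 1.0 GiB through L6 = 32768.0 GiB.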
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
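On the memtable side, each column family allows up to max_write_buffer_number = 64 buffers of write_buffer_size = 16777216 bytes (16 MiB), and min_write_buffer_number_to_merge = 6 means a flush waits until six immutable memtables can be merged into one ~96 MiB L0 file; 64 x 16 MiB = 1 GiB is the per-CF ceiling. A sketch of the same settings against the stock RocksDB C++ API (the path /tmp/rocksdb_demo and create_if_missing are illustrative, not from this log):

    #include <cassert>
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options opts;
      opts.create_if_missing = true;              // demo-only; not in the dump
      // Memtable/flush settings matching the dump above:
      opts.write_buffer_size = 16 * 1024 * 1024;  // 16777216
      opts.max_write_buffer_number = 64;
      opts.min_write_buffer_number_to_merge = 6;  // merge six 16 MiB memtables per flush
      opts.compression = rocksdb::kLZ4Compression;
      opts.num_levels = 7;

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/rocksdb_demo", &db);
      assert(s.ok());
      delete db;
      return 0;
    }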
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
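The table_factory dump for each of these shards describes a BlockBasedTable with 4 KiB blocks, format_version 5, index and filter blocks held in the block cache (top-level index pinned), a whole-key bloom filter, and a shared 483183820-byte (~460.8 MiB) cache split into 2^4 = 16 shards. BinnedLRUCache is Ceph's own cache implementation, so the sketch below substitutes stock rocksdb::NewLRUCache; the 10 bits-per-key for the bloom filter is an assumed common value, since the log prints only "bloomfilter":

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      // Settings visible in the table_factory dump above:
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_size = 4096;
      t.metadata_block_size = 4096;
      t.format_version = 5;
      // capacity 483183820 bytes, 2^4 = 16 shards, no strict limit, no high-pri pool.
      // NewLRUCache stands in for Ceph's BinnedLRUCache (not part of stock RocksDB).
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4,
                                           /*strict_capacity_limit=*/false,
                                           /*high_pri_pool_ratio=*/0.0);
      // Assumed bits/key; the dump does not record it.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));

      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }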
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
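The table_properties_collectors line configures RocksDB's CompactOnDeletionCollector: while an SST is being built, it watches a sliding window of 32768 consecutive entries and flags the file for compaction once 16384 of them are deletes; the deletion-ratio trigger is disabled at 0. Attached via the C++ API it looks roughly like this (factory name from rocksdb/utilities/table_properties_collectors.h; verify the exact signature against the RocksDB version in use):

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    void AttachDeletionCollector(rocksdb::ColumnFamilyOptions& cf_opts) {
      // Sliding window = 32768 keys, trigger = 16384 deletes, ratio trigger off.
      cf_opts.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }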
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cef350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
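These per-shard options are not hard-coded in the OSD binary: BlueStore assembles an option string from its configuration (the bluestore_rocksdb_options family of settings, plus per-shard overrides) and feeds it through RocksDB's string-based option parser. That mechanism can be exercised directly with rocksdb::GetOptionsFromString; note the parser takes semicolon-separated key=value pairs, and the string below simply restates values already visible in this dump:

    #include <cassert>
    #include <string>
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options base;
      rocksdb::Options out;
      // Same style of option string Ceph derives from its config; values from the log.
      std::string opt_str =
          "compression=kLZ4Compression;"
          "write_buffer_size=16777216;"
          "max_write_buffer_number=64;"
          "min_write_buffer_number_to_merge=6;"
          "level0_file_num_compaction_trigger=8;"
          "max_bytes_for_level_base=1073741824;"
          "max_bytes_for_level_multiplier=8";
      rocksdb::Status s = rocksdb::GetOptionsFromString(base, opt_str, &out);
      assert(s.ok());
      assert(out.max_write_buffer_number == 64);
      return 0;
    }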
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d253cee9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
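With level_compaction_dynamic_level_bytes=0 and every max_bytes_for_level_multiplier_addtl factor at 1, the options dumped above fully determine the per-level size targets of the LSM tree. A minimal Python sketch of that arithmetic, using only values from the dump (illustrative only):

base = 1073741824      # Options.max_bytes_for_level_base (1 GiB)
multiplier = 8.0       # Options.max_bytes_for_level_multiplier
num_levels = 7         # implied by addtl[0..6] in the dump
# addtl[i] == 1 for every level, so there is no per-level correction.
for level in range(1, num_levels):
    target = base * multiplier ** (level - 1)
    print(f"L{level} target: {target / 2**30:,.0f} GiB")
# L1 1 GiB, L2 8 GiB, L3 64 GiB, L4 512 GiB, L5 4,096 GiB, L6 32,768 GiB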
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d253cee9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
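The table_factory dump for this column family reports a shared BinnedLRUCache (block_cache 0x55d253cee9b0) with capacity 536870912 and num_shard_bits 4. RocksDB LRU caches are split into 2**num_shard_bits shards; assuming Ceph's BinnedLRUCache shards the same way, the per-shard budget works out as below (illustrative arithmetic only):

capacity = 536870912        # block_cache_options capacity (512 MiB)
num_shard_bits = 4          # block_cache_options num_shard_bits
shards = 2 ** num_shard_bits
print(f"{shards} shards x {capacity / shards / 2**20:.0f} MiB each")
# 16 shards x 32 MiB each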
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:           Options.merge_operator: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.compaction_filter_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.sst_partitioner_factory: None
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d254ac9ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d253cee9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.write_buffer_size: 16777216
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.max_write_buffer_number: 64
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.compression: LZ4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.num_levels: 7
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.level: 32767
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.compression_opts.strategy: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                  Options.compression_opts.enabled: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.arena_block_size: 1048576
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.disable_auto_compactions: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.inplace_update_support: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.bloom_locality: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                    Options.max_successive_merges: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.paranoid_file_checks: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.force_consistency_checks: 1
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.report_bg_io_stats: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                               Options.ttl: 2592000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                       Options.enable_blob_files: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                           Options.min_blob_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                          Options.blob_file_size: 268435456
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb:                Options.blob_file_starting_level: 0
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
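Every column family above carries the same memtable settings: write_buffer_size 16 MiB, max_write_buffer_number 64, min_write_buffer_number_to_merge 6. A rough, illustrative upper bound implied by those three options (per column family, before any cache autotuning, which this log does not show):

write_buffer_size = 16777216            # 16 MiB per memtable
max_write_buffer_number = 64
min_write_buffer_number_to_merge = 6
worst_case = write_buffer_size * max_write_buffer_number
per_flush = write_buffer_size * min_write_buffer_number_to_merge
print(f"worst case {worst_case / 2**30:.0f} GiB of memtables")   # 1 GiB
print(f"{per_flush / 2**20:.0f} MiB merged per flush")           # 96 MiB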
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4b1c027e-76b1-4836-be35-004aee77ed34
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552813575, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552817630, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302552, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4b1c027e-76b1-4836-be35-004aee77ed34", "db_session_id": "OWFIQ77A3LRVDFE8T4K2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552820499, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302552, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4b1c027e-76b1-4836-be35-004aee77ed34", "db_session_id": "OWFIQ77A3LRVDFE8T4K2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552823547, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302552, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4b1c027e-76b1-4836-be35-004aee77ed34", "db_session_id": "OWFIQ77A3LRVDFE8T4K2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302552825132, "job": 1, "event": "recovery_finished"}
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d254cb4000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: DB pointer 0x55d254c70000
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
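The _open_db line echoes the effective option string BlueStore handed to RocksDB as comma-separated key=value pairs. A minimal sketch that splits such a string into a dict (assumes no commas inside values, which holds for the string above):

def parse_rocksdb_options(opts):
    """Split a bluestore rocksdb option string ('k=v,k=v,...') into a dict."""
    return dict(kv.split("=", 1) for kv in opts.split(","))

opts = parse_rocksdb_options(
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "min_write_buffer_number_to_merge=6,write_buffer_size=16777216"
)
print(opts["compression"])   # kLZ4Compression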
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
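The DUMPING STATS record above arrives as one journal line because the syslog pipeline escapes embedded newlines as #012 (octal for \n). Undoing that escape recovers RocksDB's original multi-line stats tables; a one-function sketch:

def unescape_journal(field):
    """Expand '#012' (octal 012, i.e. newline) escapes from a syslog line."""
    return field.replace("#012", "\n")

print(unescape_journal("** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval"))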
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: _get_class not permitted to load lua
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: _get_class not permitted to load sdk
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 load_pgs
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 load_pgs opened 0 pgs
Oct 12 16:55:52 np0005481680 ceph-osd[81892]: osd.0 0 log_to_monitors true
Oct 12 16:55:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0[81888]: 2025-10-12T20:55:52.853+0000 7f6cf7abd740 -1 osd.0 0 log_to_monitors true
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct 12 16:55:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
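The set-device-class round trip above (dispatch, then finished) is each OSD registering its own device class at boot. The same mon command can be issued by hand; a sketch via the ceph CLI (assumes an admin keyring on the host; an already-set class may first need ceph osd crush rm-device-class):

import subprocess

# Same mon_command the OSD dispatched above, issued manually; illustrative.
subprocess.run(
    ["ceph", "osd", "crush", "set-device-class", "hdd", "0"],
    check=True,
)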
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:53 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:53 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:53 np0005481680 podman[82808]: 2025-10-12 20:55:53.240122673 +0000 UTC m=+0.066387131 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:53 np0005481680 podman[82808]: 2025-10-12 20:55:53.327194604 +0000 UTC m=+0.153459092 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 12 16:55:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
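The create-or-move initial_weight of 0.0195 is the device capacity expressed in TiB, which is how ceph-osd derives its default CRUSH weight. It is consistent with a 20 GiB device (an assumption; the log does not state the disk size):

size_gib = 20                       # assumed device size, not shown in the log
weight = round(size_gib / 1024, 4)  # CRUSH weight = capacity in TiB
print(weight)                       # 0.0195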
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 done with init, starting boot process
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 start_boot
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 12 16:55:54 np0005481680 ceph-osd[81892]: osd.0 0  bench count 12288000 bsize 4 KiB
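The bench line follows the maybe_override_options_for_qos messages: at start_boot the OSD runs a short write benchmark, apparently to size its mClock QoS capacity. The logged parameters imply the I/O count below (illustrative arithmetic only):

count_bytes = 12288000          # "bench count 12288000"
bsize = 4 * 1024                # "bsize 4 KiB"
print(count_bytes // bsize)     # 3000 I/Os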
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/762295079; not ready for session (expect reconnect)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:54 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
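The mgr keeps asking the mon for `osd metadata` while the OSDs are still booting, and the mon answers ENOENT until each OSD registers itself (which happens at 16:55:58-59 below). A sketch of the same poll-until-ready pattern from the outside, assuming the `ceph` CLI and an admin keyring are available on the host; the helper name is illustrative:

```python
# Poll "ceph osd metadata <id>" until the mon can serve it, mirroring the
# mgr's retry loop in the log above.
import json
import subprocess
import time

def wait_for_osd_metadata(osd_id: int, timeout: float = 60.0) -> dict:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        proc = subprocess.run(
            ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return json.loads(proc.stdout)
        time.sleep(2)  # mon returns ENOENT until the OSD has booted
    raise TimeoutError(f"osd.{osd_id} metadata still unavailable")
```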
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.395254879 +0000 UTC m=+0.059495001 container create 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:54 np0005481680 systemd[1]: Started libpod-conmon-7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468.scope.
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.363041429 +0000 UTC m=+0.027281611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.511637194 +0000 UTC m=+0.175877316 container init 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.523233366 +0000 UTC m=+0.187473498 container start 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:55:54 np0005481680 fervent_shamir[83001]: 167 167
Oct 12 16:55:54 np0005481680 systemd[1]: libpod-7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468.scope: Deactivated successfully.
Oct 12 16:55:54 np0005481680 conmon[83001]: conmon 7d421e8c85ac0b00cea7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468.scope/container/memory.events
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.535340982 +0000 UTC m=+0.199581084 container attach 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.536635025 +0000 UTC m=+0.200875147 container died 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:55:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-784987c96115b6e785779f03502037b92ea08bc6fe12195f4ac6635fcc8b52a1-merged.mount: Deactivated successfully.
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:55:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:54 np0005481680 podman[82985]: 2025-10-12 20:55:54.643136852 +0000 UTC m=+0.307376974 container remove 7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:55:54 np0005481680 systemd[1]: libpod-conmon-7d421e8c85ac0b00cea7204021b1a615e32def30c1bc35d792ebfad477cec468.scope: Deactivated successfully.
Oct 12 16:55:54 np0005481680 podman[83024]: 2025-10-12 20:55:54.887546355 +0000 UTC m=+0.061588117 container create e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:55:54 np0005481680 podman[83024]: 2025-10-12 20:55:54.864999777 +0000 UTC m=+0.039041539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:55:54 np0005481680 systemd[1]: Started libpod-conmon-e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4.scope.
Oct 12 16:55:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:55:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859669b2634fe9adbef760257591fb066c115cfa7543ac2b57fee4c22a36b878/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859669b2634fe9adbef760257591fb066c115cfa7543ac2b57fee4c22a36b878/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859669b2634fe9adbef760257591fb066c115cfa7543ac2b57fee4c22a36b878/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/859669b2634fe9adbef760257591fb066c115cfa7543ac2b57fee4c22a36b878/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:55:55 np0005481680 podman[83024]: 2025-10-12 20:55:55.055178355 +0000 UTC m=+0.229220147 container init e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:55 np0005481680 podman[83024]: 2025-10-12 20:55:55.069932579 +0000 UTC m=+0.243974341 container start e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:55:55 np0005481680 podman[83024]: 2025-10-12 20:55:55.086989894 +0000 UTC m=+0.261031696 container attach e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 16:55:55 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:55 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/762295079; not ready for session (expect reconnect)
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:55 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:55 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: from='osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: from='osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 12 16:55:55 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]: [
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:    {
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "available": false,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "being_replaced": false,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "ceph_device_lvm": false,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "lsm_data": {},
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "lvs": [],
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "path": "/dev/sr0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "rejected_reasons": [
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "Insufficient space (<5GB)",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "Has a FileSystem"
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        ],
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        "sys_api": {
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "actuators": null,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "device_nodes": [
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:                "sr0"
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            ],
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "devname": "sr0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "human_readable_size": "482.00 KB",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "id_bus": "ata",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "model": "QEMU DVD-ROM",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "nr_requests": "2",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "parent": "/dev/sr0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "partitions": {},
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "path": "/dev/sr0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "removable": "1",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "rev": "2.5+",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "ro": "0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "rotational": "0",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "sas_address": "",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "sas_device_handle": "",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "scheduler_mode": "mq-deadline",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "sectors": 0,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "sectorsize": "2048",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "size": 493568.0,
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "support_discard": "2048",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "type": "disk",
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:            "vendor": "QEMU"
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:        }
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]:    }
Oct 12 16:55:55 np0005481680 dazzling_germain[83041]: ]
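The container output above is a ceph-volume-style inventory report: cephadm ran a one-shot container to scan the host for disks, and the only device found, /dev/sr0 (the QEMU DVD-ROM), is rejected as an OSD candidate. A minimal sketch of consuming such a report; the field names are taken from the JSON above, and the sample entry is trimmed to the fields the sketch uses:

```python
# Filter a ceph-volume inventory report down to usable OSD candidates.
# The sample entry is the rejected QEMU DVD-ROM from this log.
import json

report = json.loads("""
[
  {
    "available": false,
    "path": "/dev/sr0",
    "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]
  }
]
""")

for dev in report:
    if dev["available"]:
        print(f"OSD candidate: {dev['path']}")
    else:
        print(f"skipping {dev['path']}: {', '.join(dev['rejected_reasons'])}")
```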
Oct 12 16:55:55 np0005481680 systemd[1]: libpod-e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4.scope: Deactivated successfully.
Oct 12 16:55:56 np0005481680 podman[84084]: 2025-10-12 20:55:56.000718817 +0000 UTC m=+0.027970521 container died e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:55:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-859669b2634fe9adbef760257591fb066c115cfa7543ac2b57fee4c22a36b878-merged.mount: Deactivated successfully.
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:56 np0005481680 podman[84084]: 2025-10-12 20:55:56.087047098 +0000 UTC m=+0.114298792 container remove e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 16:55:56 np0005481680 systemd[1]: libpod-conmon-e615b083b563720d388d2995ee4105bad0d6853962c4676e62ae7ea8134c8df4.scope: Deactivated successfully.
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/762295079; not ready for session (expect reconnect)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
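The two lines above show the osd_memory_target autotuner failing: cephadm proposed 134243532 bytes (the "128.0M" in its INFO line), but the option has a hard floor of 939524096 bytes (896 MiB) and the mon rejects anything below it. A small sketch reproducing the validation with the exact values from this log; the constant name is illustrative:

```python
# Reproduce the mon's osd_memory_target floor check. The 939524096-byte
# (896 MiB) floor and the proposed value are quoted from the error above.
OSD_MEMORY_TARGET_MIN = 939_524_096

proposed = 134_243_532  # cephadm's autotuned value, shown as "128.0M"

if proposed < OSD_MEMORY_TARGET_MIN:
    print(f"error parsing value: Value '{proposed}' is below minimum "
          f"{OSD_MEMORY_TARGET_MIN}")
```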
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Oct 12 16:55:56 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:55:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:57 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:57 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/762295079; not ready for session (expect reconnect)
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:57 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:55:57 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/762295079; not ready for session (expect reconnect)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079] boot
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
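The osdmap epochs advance as each OSD boots: e7 showed 0 up / 2 in, e8 above reaches 1 up after osd.1 boots, and e9 below reaches 2 up / 2 in. A sketch of watching that progression externally, assuming the `ceph` CLI is available and that the `osd stat` JSON exposes the usual epoch and count fields:

```python
# Poll the up/in counts as OSDs boot. The JSON field names (epoch,
# num_osds, num_up_osds, num_in_osds) are assumed from the standard
# CLI output.
import json
import subprocess
import time

for _ in range(10):
    stat = json.loads(subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(f"epoch {stat['epoch']}: {stat['num_up_osds']} up, "
          f"{stat['num_in_osds']} in of {stat['num_osds']} total")
    if stat["num_up_osds"] == stat["num_osds"]:
        break
    time.sleep(2)
```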
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:55:58 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 12 16:55:58 np0005481680 ceph-mon[73608]: OSD bench result of 10786.227308 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.177 iops: 2349.436 elapsed_sec: 1.277
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [WRN] : OSD bench result of 2349.435761 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
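Both OSDs benchmark far above the 50-500 IOPS sanity window for the hdd device class (these are virtio disks on a hypervisor), so the measured capacity is discarded and the 315 IOPS default is kept. The warning's own remedy is to measure with an external tool and override osd_mclock_max_capacity_iops_[hdd|ssd]; a sketch of applying such an override, where the 450.0 capacity is a placeholder, not a value from this log:

```python
# Pin the IOPS capacity mClock should assume for one OSD, as the bench
# warning recommends. Assumes the ceph CLI and an admin keyring.
import subprocess

def set_mclock_capacity(osd_id: int, iops: float, device_class: str = "hdd") -> None:
    subprocess.run(
        ["ceph", "config", "set", f"osd.{osd_id}",
         f"osd_mclock_max_capacity_iops_{device_class}", str(iops)],
        check=True,
    )

set_mclock_capacity(0, 450.0)  # placeholder value, e.g. from a fio run
```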
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 0 waiting for initial osdmap
Oct 12 16:55:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0[81888]: 2025-10-12T20:55:58.653+0000 7f6cf3a40640 -1 osd.0 0 waiting for initial osdmap
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 12 16:55:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-osd-0[81888]: 2025-10-12T20:55:58.713+0000 7f6cef068640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 12 16:55:58 np0005481680 ceph-osd[81892]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 12 16:55:59 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/477320606; not ready for session (expect reconnect)
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:59 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 12 16:55:59 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] creating mgr pool
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606] boot
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:55:59 np0005481680 ceph-osd[81892]: osd.0 9 state: booting -> active
Oct 12 16:55:59 np0005481680 ceph-osd[81892]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 12 16:55:59 np0005481680 ceph-osd[81892]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 12 16:55:59 np0005481680 ceph-osd[81892]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: osd.1 [v2:192.168.122.101:6800/762295079,v1:192.168.122.101:6801/762295079] boot
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: OSD bench result of 2349.435761 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:55:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: osd.0 [v2:192.168.122.100:6802/477320606,v1:192.168.122.100:6803/477320606] boot
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 12 16:56:00 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] creating main.db for devicehealth
Oct 12 16:56:00 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 16:56:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmjeht(active, since 84s)
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Oct 12 16:56:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct 12 16:56:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:02 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 12 16:56:03 np0005481680 ceph-mon[73608]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 12 16:56:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:56:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:56:13 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:56:13 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
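Before copying files to compute-2, the mgr runs `config generate-minimal-conf`, which yields a stripped ceph.conf (essentially the fsid and mon addresses) suitable for client hosts; that is the content behind the "Updating compute-2:...ceph.conf" lines. A sketch of fetching the same output by hand, assuming local CLI access:

```python
# Fetch the same minimal ceph.conf the mgr distributes to compute-2.
# Assumes the ceph CLI and an admin keyring are available locally.
import subprocess

minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout
print(minimal_conf)  # a bare [global] section: fsid and mon_host
```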
Oct 12 16:56:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:14 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:14 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:15 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:56:15 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:56:15 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:56:15 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:56:15 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:56:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:16 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 8ea21502-de10-430a-916b-e4ba13b1b5b4 (Updating mon deployment (+2 -> 3))
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:16 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct 12 16:56:16 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 12 16:56:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: Deploying daemon mon.compute-2 on compute-2
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 12 16:56:17 np0005481680 ceph-mon[73608]: Cluster is now healthy
Oct 12 16:56:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct 12 16:56:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct 12 16:56:19 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:19 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct 12 16:56:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:20 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:20 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:21 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:21 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:21 np0005481680 python3[84141]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
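The Ansible task above shells out to podman and pipes `ceph status --format json` through `jq .osdmap.num_up_osds` to count up OSDs. A minimal Python sketch of the same check without jq, reusing the image, fsid, and keyring paths exactly as logged (everything else here is an assumption, not part of the playbook):

    import json
    import subprocess

    FSID = "5adb8c35-1b74-5730-a252-62321f654cd5"
    IMAGE = "quay.io/ceph/ceph:v19"

    def num_up_osds():
        """Replicate `ceph status --format json | jq .osdmap.num_up_osds`."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "status", "--format", "json",
        ]
        status = json.loads(subprocess.check_output(cmd))
        return status["osdmap"]["num_up_osds"]

    if __name__ == "__main__":
        print(num_up_osds())  # the cluster logged here reports 2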
Oct 12 16:56:21 np0005481680 podman[84143]: 2025-10-12 20:56:21.653252028 +0000 UTC m=+0.078468307 container create d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 12 16:56:21 np0005481680 systemd[1]: Started libpod-conmon-d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872.scope.
Oct 12 16:56:21 np0005481680 podman[84143]: 2025-10-12 20:56:21.622007734 +0000 UTC m=+0.047224083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effa86df0ba2facc662e752e909a66be99af805aa23b148ebd1c9122accd3bbe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effa86df0ba2facc662e752e909a66be99af805aa23b148ebd1c9122accd3bbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effa86df0ba2facc662e752e909a66be99af805aa23b148ebd1c9122accd3bbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 12 16:56:21 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:21 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 12 16:56:21 np0005481680 podman[84143]: 2025-10-12 20:56:21.754419876 +0000 UTC m=+0.179636185 container init d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 16:56:21 np0005481680 podman[84143]: 2025-10-12 20:56:21.766411708 +0000 UTC m=+0.191627987 container start d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:21 np0005481680 podman[84143]: 2025-10-12 20:56:21.770393062 +0000 UTC m=+0.195609371 container attach d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 16:56:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 12 16:56:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 12 16:56:22 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:22 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:22 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:22 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct 12 16:56:23 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:23 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 12 16:56:23 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:23 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : last_changed 2025-10-12T20:56:19.513865+0000
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : created 2025-10-12T20:54:15.161334+0000
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmjeht(active, since 107s)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
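The post-election summary above (monmap epoch 2, mons compute-0 and compute-2 in quorum) can also be read programmatically via `ceph quorum_status`, which returns the quorum names and monmap as JSON. A sketch under the same assumptions as the podman invocations logged on this host:

    import json
    import subprocess

    CEPH = ["podman", "run", "--rm", "--net=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring"]

    q = json.loads(subprocess.check_output(CEPH + ["quorum_status", "--format", "json"]))
    # At monmap epoch 2 this would print ['compute-0', 'compute-2'];
    # epoch 3 (further down in this log) adds compute-1.
    print(q["quorum_names"], q["monmap"]["epoch"])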
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 8ea21502-de10-430a-916b-e4ba13b1b5b4 (Updating mon deployment (+2 -> 3))
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 8ea21502-de10-430a-916b-e4ba13b1b5b4 (Updating mon deployment (+2 -> 3)) in 8 seconds
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 8626c33a-1835-42e0-b9ac-9a73db35e246 (Updating mgr deployment (+2 -> 3))
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
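cephadm drives these deployments by dispatching structured mon commands, as the dispatch/finished pair above shows. The same `auth get-or-create` can be issued from Python through the librados binding's mon_command(); the entity name and caps below are copied from the audit line, while the connection settings are assumptions for illustration:

    import json
    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "mgr.compute-2.iamnla",
        "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"],
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    # On success (ret == 0) outbuf holds the keyring fragment for the entity.
    print(ret, outbuf.decode())
    cluster.shutdown()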
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: Deploying daemon mon.compute-1 on compute-1
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0 calling monitor election
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-2 calling monitor election
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:24 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459852734' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 12 16:56:25 np0005481680 sweet_elgamal[84159]: 
Oct 12 16:56:25 np0005481680 sweet_elgamal[84159]: {"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1760302559,"num_in_osds":2,"osd_in_since":1760302540,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894627840,"bytes_avail":42046656512,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-10-12T20:54:17:378101+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-12T20:55:40.057511+0000","services":{}},"progress_events":{"8ea21502-de10-430a-916b-e4ba13b1b5b4":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Oct 12 16:56:25 np0005481680 systemd[1]: libpod-d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872.scope: Deactivated successfully.
Oct 12 16:56:25 np0005481680 podman[84143]: 2025-10-12 20:56:25.291870762 +0000 UTC m=+3.717087061 container died d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay-effa86df0ba2facc662e752e909a66be99af805aa23b148ebd1c9122accd3bbe-merged.mount: Deactivated successfully.
Oct 12 16:56:25 np0005481680 podman[84143]: 2025-10-12 20:56:25.344396501 +0000 UTC m=+3.769612810 container remove d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872 (image=quay.io/ceph/ceph:v19, name=sweet_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:25 np0005481680 systemd[1]: libpod-conmon-d6770dc05de47070700979c19d063c31b72199f1c07ec65be0d8e2fa67623872.scope: Deactivated successfully.
Oct 12 16:56:25 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4059562679; not ready for session (expect reconnect)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: Deploying daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct 12 16:56:25 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:25 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 12 16:56:25 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct 12 16:56:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:25 np0005481680 python3[84223]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
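This task, and the matching `volumes` one invoked after the vms pool completes further down, create replicated pools with the autoscaler enabled. A sketch issuing the same CLI for both pool names as logged, with mounts and image taken from the commands above:

    import subprocess

    CEPH = ["podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring"]

    for pool in ("vms", "volumes"):  # pool names as they appear in this log
        subprocess.run(CEPH + ["osd", "pool", "create", pool,
                               "replicated_rule", "--autoscale-mode", "on"],
                       check=True)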
Oct 12 16:56:26 np0005481680 podman[84224]: 2025-10-12 20:56:26.043350204 +0000 UTC m=+0.071597417 container create f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 16:56:26 np0005481680 systemd[1]: Started libpod-conmon-f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553.scope.
Oct 12 16:56:26 np0005481680 podman[84224]: 2025-10-12 20:56:26.015092908 +0000 UTC m=+0.043340181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:26 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f309182705b983c8c9df64d5c28ff930d5c5bee76c6fd50409da5b52d173d9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f309182705b983c8c9df64d5c28ff930d5c5bee76c6fd50409da5b52d173d9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:26 np0005481680 podman[84224]: 2025-10-12 20:56:26.141448312 +0000 UTC m=+0.169695565 container init f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:26 np0005481680 podman[84224]: 2025-10-12 20:56:26.151777421 +0000 UTC m=+0.180024644 container start f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 16:56:26 np0005481680 podman[84224]: 2025-10-12 20:56:26.156088423 +0000 UTC m=+0.184335646 container attach f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:26 np0005481680 ceph-mgr[73901]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct 12 16:56:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:56:26.518+0000 7f1dfcd41640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:26 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:26 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:27 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 3 completed events
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:27 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:27 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:28 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:28 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:29 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:29 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct 12 16:56:30 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:30 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : last_changed 2025-10-12T20:56:25.747024+0000
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : created 2025-10-12T20:54:15.161334+0000
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fmjeht(active, since 113s)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.orllvh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.orllvh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.orllvh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:30 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.orllvh on compute-1
Oct 12 16:56:30 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.orllvh on compute-1
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0 calling monitor election
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-2 calling monitor election
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-1 calling monitor election
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:30 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.orllvh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:56:31 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1923574782; not ready for session (expect reconnect)
Oct 12 16:56:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:56:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:56:31 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.orllvh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 12 16:56:31 np0005481680 ceph-mon[73608]: Deploying daemon mgr.compute-1.orllvh on compute-1
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1914509900' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:56:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:56:32.752+0000 7f1dfcd41640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 8626c33a-1835-42e0-b9ac-9a73db35e246 (Updating mgr deployment (+2 -> 3))
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 8626c33a-1835-42e0-b9ac-9a73db35e246 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 285faae2-f6c2-49dc-b0b8-52a7e90cfcd4 (Updating crash deployment (+1 -> 3))
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct 12 16:56:32 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1914509900' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1914509900' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct 12 16:56:32 np0005481680 admiring_blackwell[84240]: pool 'vms' created
Oct 12 16:56:32 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
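Note how the monitor parsed that CLI: per the dispatched mon_command in the audit lines above, the positional `replicated_rule` argument landed in the `erasure_code_profile` field of the JSON command. Sending the JSON form directly makes the field names explicit; a sketch over librados, with connection settings assumed as before:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    # Mirror the dispatched command from the audit log, field for field.
    cmd = {"prefix": "osd pool create", "pool": "vms",
           "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)  # outs carries the status string, e.g. "pool 'vms' created"
    cluster.shutdown()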
Oct 12 16:56:32 np0005481680 systemd[1]: libpod-f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553.scope: Deactivated successfully.
Oct 12 16:56:32 np0005481680 podman[84224]: 2025-10-12 20:56:32.938427428 +0000 UTC m=+6.966674651 container died f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 16:56:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7f309182705b983c8c9df64d5c28ff930d5c5bee76c6fd50409da5b52d173d9a-merged.mount: Deactivated successfully.
Oct 12 16:56:32 np0005481680 podman[84224]: 2025-10-12 20:56:32.992123259 +0000 UTC m=+7.020370482 container remove f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553 (image=quay.io/ceph/ceph:v19, name=admiring_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:33 np0005481680 systemd[1]: libpod-conmon-f682fb6d5918283874e56af485fdf4aec1268a8245ddbe18b900c56dd9a0a553.scope: Deactivated successfully.
Oct 12 16:56:33 np0005481680 python3[84304]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
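The ansible task above is the pattern used for every pool in this run: a one-shot 'podman run --rm' of the ceph image with '--entrypoint ceph', which produces the container create/init/start/attach/died/remove sequence that follows it in the journal. A minimal sketch of the same invocation, assuming podman and the admin keyring are in place (image tag and fsid copied from the log line, not canonical values):

    import subprocess

    FSID = "5adb8c35-1b74-5730-a252-62321f654cd5"
    IMAGE = "quay.io/ceph/ceph:v19"

    def create_pool(name: str) -> None:
        # One-shot ceph CLI container; --rm removes it as soon as the
        # pool-create call returns, matching the journal entries below.
        subprocess.run(
            ["podman", "run", "--rm", "--net=host", "--ipc=host",
             "--volume", "/etc/ceph:/etc/ceph:z",
             "--entrypoint", "ceph", IMAGE,
             "--fsid", FSID,
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "osd", "pool", "create", name, "replicated_rule",
             "--autoscale-mode", "on"],
            check=True,
        )

    create_pool("volumes")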
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.409550752 +0000 UTC m=+0.042558901 container create 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 16:56:33 np0005481680 systemd[1]: Started libpod-conmon-90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7.scope.
Oct 12 16:56:33 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.3887785 +0000 UTC m=+0.021786729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa7aae67fc25fc0bc0d89fcb92bfe0a5dc61f402a9b598efc7fe911b5bc9f730/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa7aae67fc25fc0bc0d89fcb92bfe0a5dc61f402a9b598efc7fe911b5bc9f730/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.505597926 +0000 UTC m=+0.138606105 container init 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.511110559 +0000 UTC m=+0.144118698 container start 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.515596057 +0000 UTC m=+0.148604206 container attach 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/226889804' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/226889804' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct 12 16:56:33 np0005481680 confident_ride[84321]: pool 'volumes' created
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: Deploying daemon crash.compute-2 on compute-2
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1914509900' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:33 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/226889804' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:33 np0005481680 systemd[1]: libpod-90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7.scope: Deactivated successfully.
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.944240962 +0000 UTC m=+0.577249151 container died 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:33 np0005481680 systemd[1]: var-lib-containers-storage-overlay-aa7aae67fc25fc0bc0d89fcb92bfe0a5dc61f402a9b598efc7fe911b5bc9f730-merged.mount: Deactivated successfully.
Oct 12 16:56:33 np0005481680 podman[84305]: 2025-10-12 20:56:33.988081485 +0000 UTC m=+0.621089624 container remove 90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7 (image=quay.io/ceph/ceph:v19, name=confident_ride, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:34 np0005481680 systemd[1]: libpod-conmon-90fd0cdbd8c598300bb2134d936e9b9d1db10da60cea0f2645026cf33fb7e6e7.scope: Deactivated successfully.
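Each of these one-shot containers leaves the same podman event trail (an image pull, then container create, init, start, attach, died, remove) plus the paired libpod and libpod-conmon systemd scopes. A small parser for pulling those events out of journal lines like the ones above; the regex only recognizes the exact "container <event> <id> (image=..., name=...)" shape seen here and ignores everything else:

    import re
    import sys
    from typing import Iterable, Iterator, NamedTuple

    EVENT_RE = re.compile(
        r"podman\[\d+\]: \S+ \S+ \S+ UTC m=\+\S+ container (?P<event>\w+) "
        r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    class ContainerEvent(NamedTuple):
        event: str
        cid: str
        image: str
        name: str

    def container_events(lines: Iterable[str]) -> Iterator[ContainerEvent]:
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield ContainerEvent(m["event"], m["cid"], m["image"], m["name"])

    if __name__ == "__main__":
        # Feed journal text on stdin, e.g.: journalctl | python3 podman_events.py
        for ev in container_events(sys.stdin):
            print(ev.event, ev.name, ev.cid[:12])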
Oct 12 16:56:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 13 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:34 np0005481680 python3[84386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.411402031 +0000 UTC m=+0.061992357 container create 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:34 np0005481680 systemd[1]: Started libpod-conmon-872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956.scope.
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.385975298 +0000 UTC m=+0.036565684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:34 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3da0b34a60ab61902aa6d84956b6b67844a8f2a50ac6c5cc2c8cf88cfeecceb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3da0b34a60ab61902aa6d84956b6b67844a8f2a50ac6c5cc2c8cf88cfeecceb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.503781599 +0000 UTC m=+0.154371905 container init 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.51029567 +0000 UTC m=+0.160885976 container start 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.513502193 +0000 UTC m=+0.164092509 container attach 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 285faae2-f6c2-49dc-b0b8-52a7e90cfcd4 (Updating crash deployment (+1 -> 3))
Oct 12 16:56:34 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 285faae2-f6c2-49dc-b0b8-52a7e90cfcd4 (Updating crash deployment (+1 -> 3)) in 2 seconds
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
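The back-to-back 'config generate-minimal-conf' dispatches are the cephadm mgr regenerating the stripped-down ceph.conf it ships to managed hosts whenever it deploys or reconfigures a daemon, as it is doing for crash.compute-2 here. The same output can be fetched by hand; a one-liner sketch, assuming an admin-capable keyring:

    import subprocess

    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Essentially the fsid and mon addresses, suitable for /etc/ceph/ceph.conf.
    print(minimal_conf)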
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2692018584' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2692018584' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct 12 16:56:34 np0005481680 practical_germain[84402]: pool 'backups' created
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct 12 16:56:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 14 pg[4.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
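POOL_APP_NOT_ENABLED is expected immediately after 'osd pool create': the health check fires for any pool not yet tagged with an application. For OpenStack pools like these, rbd is the usual tag; a sketch of clearing the warning, assuming the pool names from this run and an admin keyring (the pool/application pairing is an assumption based on the usual OpenStack layout, not stated in this log):

    import subprocess

    # Tag each new pool; the health warning clears once no untagged
    # pools remain.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(
            ["ceph", "osd", "pool", "application", "enable", pool, "rbd"],
            check=True,
        )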
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/226889804' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2692018584' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:34 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2692018584' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:34 np0005481680 systemd[1]: libpod-872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956.scope: Deactivated successfully.
Oct 12 16:56:34 np0005481680 podman[84387]: 2025-10-12 20:56:34.946727478 +0000 UTC m=+0.597317784 container died 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:56:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a3da0b34a60ab61902aa6d84956b6b67844a8f2a50ac6c5cc2c8cf88cfeecceb-merged.mount: Deactivated successfully.
Oct 12 16:56:35 np0005481680 podman[84387]: 2025-10-12 20:56:35.006353212 +0000 UTC m=+0.656943508 container remove 872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956 (image=quay.io/ceph/ceph:v19, name=practical_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:35 np0005481680 systemd[1]: libpod-conmon-872fc54cdc203588f10c54d2693da050a9c19f2e5b400c8c4df7b3fae5850956.scope: Deactivated successfully.
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.375969569 +0000 UTC m=+0.057467429 container create 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 12 16:56:35 np0005481680 python3[84544]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:35 np0005481680 systemd[1]: Started libpod-conmon-2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605.scope.
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.34724073 +0000 UTC m=+0.028738620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.471008037 +0000 UTC m=+0.152505927 container init 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.482447335 +0000 UTC m=+0.163945225 container start 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 16:56:35 np0005481680 recursing_germain[84574]: 167 167
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.487782404 +0000 UTC m=+0.169280294 container attach 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.489008536 +0000 UTC m=+0.170506436 container died 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:35 np0005481680 systemd[1]: libpod-2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605.scope: Deactivated successfully.
Oct 12 16:56:35 np0005481680 podman[84573]: 2025-10-12 20:56:35.506498163 +0000 UTC m=+0.072565673 container create 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 16:56:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-930ce52faccc53206725cf3e9835b5378d7950eca48e9a9367435497b1b904f4-merged.mount: Deactivated successfully.
Oct 12 16:56:35 np0005481680 podman[84557]: 2025-10-12 20:56:35.540978471 +0000 UTC m=+0.222476331 container remove 2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 16:56:35 np0005481680 podman[84573]: 2025-10-12 20:56:35.473582654 +0000 UTC m=+0.039650174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:35 np0005481680 systemd[1]: Started libpod-conmon-8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3.scope.
Oct 12 16:56:35 np0005481680 systemd[1]: libpod-conmon-2fd135fd800a52ccfc93f593258e4fc66aa568ded8e50cc8e5fd572046fc2605.scope: Deactivated successfully.
Oct 12 16:56:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eceb4041e352ff58788652549359882e1ef1092d030c73cf7ae3b74022338d01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eceb4041e352ff58788652549359882e1ef1092d030c73cf7ae3b74022338d01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 podman[84573]: 2025-10-12 20:56:35.62761323 +0000 UTC m=+0.193680770 container init 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 16:56:35 np0005481680 podman[84573]: 2025-10-12 20:56:35.632411396 +0000 UTC m=+0.198478876 container start 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 16:56:35 np0005481680 podman[84573]: 2025-10-12 20:56:35.636380098 +0000 UTC m=+0.202447608 container attach 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:56:35 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 5 completed events
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:56:35 np0005481680 podman[84618]: 2025-10-12 20:56:35.826949017 +0000 UTC m=+0.096743154 container create 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:35 np0005481680 systemd[1]: Started libpod-conmon-94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92.scope.
Oct 12 16:56:35 np0005481680 podman[84618]: 2025-10-12 20:56:35.798348191 +0000 UTC m=+0.068142378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:35 np0005481680 podman[84618]: 2025-10-12 20:56:35.926374269 +0000 UTC m=+0.196168446 container init 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct 12 16:56:35 np0005481680 podman[84618]: 2025-10-12 20:56:35.943947957 +0000 UTC m=+0.213742104 container start 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:56:35 np0005481680 podman[84618]: 2025-10-12 20:56:35.949927553 +0000 UTC m=+0.219721690 container attach 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:35 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/606778158' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:36 np0005481680 amazing_jang[84655]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:56:36 np0005481680 amazing_jang[84655]: --> All data devices are unavailable
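"0 physical, 1 LVM" followed by "All data devices are unavailable" means the ceph-volume run inside this container saw one LVM device as a candidate and rejected it, typically because it is already prepared for an OSD or has no usable free space (a guess; the log does not say which). The inventory's rejected_reasons field usually shows the exact cause; a sketch, assuming ceph-volume is on PATH (inside the ceph container it is) and that the "available"/"rejected_reasons" fields of recent ceph-volume releases are present:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        # Print each device's path and either "available" or the list of
        # reasons ceph-volume gave for rejecting it.
        status = "available" if dev.get("available") else dev.get("rejected_reasons")
        print(dev.get("path"), status)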
Oct 12 16:56:36 np0005481680 systemd[1]: libpod-94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92.scope: Deactivated successfully.
Oct 12 16:56:36 np0005481680 podman[84618]: 2025-10-12 20:56:36.305835352 +0000 UTC m=+0.575629469 container died 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2d632f693c004cf9ac17ef183ad4c27efe9ca947be3465e000dce4c36ba4fdb5-merged.mount: Deactivated successfully.
Oct 12 16:56:36 np0005481680 podman[84618]: 2025-10-12 20:56:36.355667911 +0000 UTC m=+0.625462028 container remove 94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:56:36 np0005481680 systemd[1]: libpod-conmon-94af26055de5cde2c76775314abec25a73df498aafe539d68cf7f62f55bb4f92.scope: Deactivated successfully.
Oct 12 16:56:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"} v 0)
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"}]: dispatch
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/606778158' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"}]': finished
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 12 16:56:36 np0005481680 interesting_blackwell[84608]: pool 'images' created
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:36 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:36 np0005481680 systemd[1]: libpod-8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3.scope: Deactivated successfully.
Oct 12 16:56:36 np0005481680 podman[84573]: 2025-10-12 20:56:36.607761034 +0000 UTC m=+1.173828544 container died 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-eceb4041e352ff58788652549359882e1ef1092d030c73cf7ae3b74022338d01-merged.mount: Deactivated successfully.
Oct 12 16:56:36 np0005481680 podman[84573]: 2025-10-12 20:56:36.666815373 +0000 UTC m=+1.232882883 container remove 8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3 (image=quay.io/ceph/ceph:v19, name=interesting_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 16:56:36 np0005481680 systemd[1]: libpod-conmon-8efb964e93cb5c2c83b4cc1ddbde12e8ac57f2f455dd3bb9cc72720da7254ad3.scope: Deactivated successfully.
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/606778158' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/1494438305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"}]: dispatch
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"}]: dispatch
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/606778158' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:36 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3be92c30-27e3-4d50-9d62-4d2c31481440"}]': finished
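The 'osd new' exchange above is ceph-volume, authenticating as client.bootstrap-osd with the keyring bind-mounted earlier at /var/lib/ceph/bootstrap-osd/ceph.keyring, reserving an OSD id for a freshly generated uuid. That reservation is why e16 reports "3 total, 2 up, 3 in", and why the mgr's 'osd metadata' call for osd.2 returned "(2) No such file or directory": the id exists in the map, but the daemon has not booted yet. A sketch of the same reservation, with the uuid generated locally instead of by ceph-volume prepare:

    import subprocess
    import uuid

    osd_uuid = str(uuid.uuid4())
    # "ceph osd new <uuid>" prints the allocated OSD id on stdout.
    osd_id = subprocess.run(
        ["ceph", "-n", "client.bootstrap-osd",
         "-k", "/var/lib/ceph/bootstrap-osd/ceph.keyring",
         "osd", "new", osd_uuid],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(f"reserved osd.{osd_id} for uuid {osd_uuid}")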
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.021305655 +0000 UTC m=+0.060681892 container create 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:37 np0005481680 python3[84799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:37 np0005481680 systemd[1]: Started libpod-conmon-04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e.scope.
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:36.994533537 +0000 UTC m=+0.033909784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:37 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.111851226 +0000 UTC m=+0.151227493 container init 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla started
Oct 12 16:56:37 np0005481680 podman[84830]: 2025-10-12 20:56:37.115883121 +0000 UTC m=+0.048952448 container create 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mgr.compute-2.iamnla 192.168.122.102:0/2785834523; not ready for session (expect reconnect)
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.12272976 +0000 UTC m=+0.162105947 container start 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 12 16:56:37 np0005481680 affectionate_greider[84831]: 167 167
Oct 12 16:56:37 np0005481680 systemd[1]: libpod-04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e.scope: Deactivated successfully.
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.128993153 +0000 UTC m=+0.168369440 container attach 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.131800276 +0000 UTC m=+0.171176503 container died 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 16:56:37 np0005481680 systemd[1]: Started libpod-conmon-66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774.scope.
Oct 12 16:56:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5ba8f85cc1d6d34f076991d691180214fceb4cf9ad7d547ba368044f758a106c-merged.mount: Deactivated successfully.
Oct 12 16:56:37 np0005481680 podman[84814]: 2025-10-12 20:56:37.17068523 +0000 UTC m=+0.210061427 container remove 04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:56:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bed080dca610e91d3adb1f9ce217dfc0ea85bab7b4ca9407df257c2bed9562/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bed080dca610e91d3adb1f9ce217dfc0ea85bab7b4ca9407df257c2bed9562/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
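These xfs warnings are informational: on a filesystem apparently formatted without the bigtime feature, XFS inode timestamps are signed 32-bit seconds, so they run out at 0x7fffffff. A quick check of where that limit lands (plain Python, nothing Ceph-specific):

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the largest signed 32-bit epoch value
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00, matching the kernel's "until 2038" note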
Oct 12 16:56:37 np0005481680 systemd[1]: libpod-conmon-04e66cfd34bbaab80fea32dcad934869d96cf423dc7cdeeaf9693d74870a1d3e.scope: Deactivated successfully.
Oct 12 16:56:37 np0005481680 podman[84830]: 2025-10-12 20:56:37.189667256 +0000 UTC m=+0.122736592 container init 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:56:37 np0005481680 podman[84830]: 2025-10-12 20:56:37.099894405 +0000 UTC m=+0.032963751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:37 np0005481680 podman[84830]: 2025-10-12 20:56:37.198824684 +0000 UTC m=+0.131894010 container start 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:37 np0005481680 podman[84830]: 2025-10-12 20:56:37.202420168 +0000 UTC m=+0.135489514 container attach 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:56:37
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
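The balancer bails out here because too many PGs are still in the unknown state while the pools are being created. A minimal sketch of the comparison the two lines above imply (the 0.05 threshold and 0.20 unknown fraction are quoted from the log; the variable names are illustrative, not the balancer's internals):

    # Values quoted from the balancer lines above
    max_misplaced = 0.05    # "Mode upmap, max misplaced 0.050000"
    unknown_ratio = 0.20    # "Some PGs (0.200000) are unknown"
    if unknown_ratio > max_misplaced:
        print("Some PGs are unknown; try again later")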
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
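Each pg_autoscaler pass above derives a fractional PG target from the pool's share of raw capacity and then quantizes it. A rough sketch of the rounding step, assuming the usual nearest-power-of-two rule (illustrative only; Ceph applies extra change thresholds, and the 32 shown for the empty pools presumably comes from a separate default target rather than from this rounding):

    import math

    def quantize_pg_target(raw: float, floor: int = 1) -> int:
        # Round to the nearest power of two, never below `floor`.
        if raw <= floor:
            return floor
        return max(floor, 2 ** round(math.log2(raw)))

    # '.mgr' above: pg target 0.0021557... quantized to 1
    print(quantize_pg_target(0.0021557249951162337))  # -> 1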
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:56:37 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.360775586 +0000 UTC m=+0.052543581 container create 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:37 np0005481680 systemd[1]: Started libpod-conmon-5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051.scope.
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.336302968 +0000 UTC m=+0.028071053 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b32793ff082852847640c64f56af017f4479d28d0d0ae608090069f9f77cb7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b32793ff082852847640c64f56af017f4479d28d0d0ae608090069f9f77cb7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b32793ff082852847640c64f56af017f4479d28d0d0ae608090069f9f77cb7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b32793ff082852847640c64f56af017f4479d28d0d0ae608090069f9f77cb7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.450510636 +0000 UTC m=+0.142278721 container init 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.461889532 +0000 UTC m=+0.153657527 container start 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.465350723 +0000 UTC m=+0.157118808 container attach 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1754530205' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:37 np0005481680 magical_thompson[84908]: {
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:    "0": [
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:        {
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "devices": [
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "/dev/loop3"
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            ],
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "lv_name": "ceph_lv0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "lv_size": "21470642176",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "name": "ceph_lv0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "tags": {
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.cluster_name": "ceph",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.crush_device_class": "",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.encrypted": "0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.osd_id": "0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.type": "block",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.vdo": "0",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:                "ceph.with_tpm": "0"
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            },
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "type": "block",
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:            "vg_name": "ceph_vg0"
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:        }
Oct 12 16:56:37 np0005481680 magical_thompson[84908]:    ]
Oct 12 16:56:37 np0005481680 magical_thompson[84908]: }
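The JSON that magical_thompson prints is an inventory of OSD logical volumes keyed by OSD id, in the style of ceph-volume lvm list --format json, with the LVM tags carrying the cluster fsid, OSD fsid and device role. A small sketch of pulling the useful fields out of such a document (abridged to the keys shown above; 21470642176 bytes is the roughly 20 GiB LV backing osd.0):

    import json

    # Abridged copy of the object logged above
    raw = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "lv_size": "21470642176",
                     "tags": {"ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
                              "ceph.type": "block"}}]}'''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} ({gib:.1f} GiB, {lv['tags']['ceph.type']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB, block)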
Oct 12 16:56:37 np0005481680 systemd[1]: libpod-5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051.scope: Deactivated successfully.
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.772350027 +0000 UTC m=+0.464118022 container died 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2b32793ff082852847640c64f56af017f4479d28d0d0ae608090069f9f77cb7e-merged.mount: Deactivated successfully.
Oct 12 16:56:37 np0005481680 podman[84873]: 2025-10-12 20:56:37.817462053 +0000 UTC m=+0.509230048 container remove 5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:56:37 np0005481680 systemd[1]: libpod-conmon-5d44b13efa9c2bf324f0761f8524ab017fb0f7ccec3b2bd708ef3256227dd051.scope: Deactivated successfully.
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:37 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1754530205' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1754530205' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Oct 12 16:56:38 np0005481680 gifted_agnesi[84857]: pool 'cephfs.cephfs.meta' created
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.fmjeht(active, since 2m), standbys: compute-2.iamnla
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:38 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 5df8f4e9-2e5f-490f-92c2-e81e567b400f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"} v 0)
Oct 12 16:56:38 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"}]: dispatch
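The "failed to return metadata for osd.2" error is expected at this point: osd.2 is counted as "in" but not yet "up" (osdmap e17: 3 total, 2 up, 3 in), so the monitor holds no daemon metadata for it, and the same exchange repeats below until the OSD boots. A hedged way to confirm the up/in split from the host (assuming the ceph CLI and admin keyring are reachable, and that osd stat exposes these JSON fields as in current releases):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    stat = json.loads(out)
    print(stat["num_osds"], stat["num_up_osds"], stat["num_in_osds"])  # e.g. 3 2 3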
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:38 np0005481680 systemd[1]: libpod-66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774.scope: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[84830]: 2025-10-12 20:56:38.050219401 +0000 UTC m=+0.983288757 container died 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 16:56:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b8bed080dca610e91d3adb1f9ce217dfc0ea85bab7b4ca9407df257c2bed9562-merged.mount: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[84830]: 2025-10-12 20:56:38.100284926 +0000 UTC m=+1.033354282 container remove 66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774 (image=quay.io/ceph/ceph:v19, name=gifted_agnesi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:56:38 np0005481680 systemd[1]: libpod-conmon-66d0ee9d841497a80b2b7d110b113cd84ef22a517ce2926eda13a7a9cf4fd774.scope: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.435145537 +0000 UTC m=+0.045725243 container create f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:56:38 np0005481680 systemd[1]: Started libpod-conmon-f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c.scope.
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v71: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:38 np0005481680 python3[85043]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
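This ansible task is the source of the cephfs.cephfs.data pool creation audited above: a one-shot podman container runs the ceph CLI against the cluster with the host's /etc/ceph mounted in. The same invocation, rebuilt as an argument list from the logged _raw_params (a sketch; every path, image and fsid below is copied from the log line):

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "create", "cephfs.cephfs.data",
        "replicated_rule", "--autoscale-mode", "on",
    ]
    subprocess.run(cmd, check=True)  # stdout, per the log: pool 'cephfs.cephfs.data' created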
Oct 12 16:56:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.414812677 +0000 UTC m=+0.025392413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.521432857 +0000 UTC m=+0.132012583 container init f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.528544662 +0000 UTC m=+0.139124368 container start f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.532157117 +0000 UTC m=+0.142736833 container attach f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Oct 12 16:56:38 np0005481680 vigilant_hoover[85073]: 167 167
Oct 12 16:56:38 np0005481680 systemd[1]: libpod-f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c.scope: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.536213331 +0000 UTC m=+0.146793037 container died f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:56:38 np0005481680 podman[85076]: 2025-10-12 20:56:38.557407875 +0000 UTC m=+0.043111155 container create 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:56:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6b6dd76963013a79fb91ef8cddd573964be9b0e7c63866626b8cc339020b58bd-merged.mount: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[85056]: 2025-10-12 20:56:38.588879365 +0000 UTC m=+0.199459101 container remove f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:56:38 np0005481680 systemd[1]: Started libpod-conmon-48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5.scope.
Oct 12 16:56:38 np0005481680 systemd[1]: libpod-conmon-f969dce84f040c2bc77332b52b1f315e3ba819ae9e85144c74eb3b465302c51c.scope: Deactivated successfully.
Oct 12 16:56:38 np0005481680 podman[85076]: 2025-10-12 20:56:38.538924262 +0000 UTC m=+0.024627552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f12602c2d4b8156f0d7e12e6548c4acaf93d67edd62a9bd23c2453547bfd74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f12602c2d4b8156f0d7e12e6548c4acaf93d67edd62a9bd23c2453547bfd74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 podman[85076]: 2025-10-12 20:56:38.658351096 +0000 UTC m=+0.144054366 container init 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh started
Oct 12 16:56:38 np0005481680 podman[85076]: 2025-10-12 20:56:38.664343932 +0000 UTC m=+0.150047232 container start 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:38 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from mgr.compute-1.orllvh 192.168.122.101:0/3262943275; not ready for session (expect reconnect)
Oct 12 16:56:38 np0005481680 podman[85076]: 2025-10-12 20:56:38.668385638 +0000 UTC m=+0.154088928 container attach 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:38 np0005481680 podman[85117]: 2025-10-12 20:56:38.752289346 +0000 UTC m=+0.037617182 container create 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:56:38 np0005481680 systemd[1]: Started libpod-conmon-16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9.scope.
Oct 12 16:56:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:38 np0005481680 podman[85117]: 2025-10-12 20:56:38.735661142 +0000 UTC m=+0.020989068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27217dd6d35fd5e5c9b256c97692a9970d1357cd3f63f033233852b48a6173e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27217dd6d35fd5e5c9b256c97692a9970d1357cd3f63f033233852b48a6173e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27217dd6d35fd5e5c9b256c97692a9970d1357cd3f63f033233852b48a6173e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27217dd6d35fd5e5c9b256c97692a9970d1357cd3f63f033233852b48a6173e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:38 np0005481680 podman[85117]: 2025-10-12 20:56:38.851949694 +0000 UTC m=+0.137277580 container init 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:38 np0005481680 podman[85117]: 2025-10-12 20:56:38.864178813 +0000 UTC m=+0.149506689 container start 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:38 np0005481680 podman[85117]: 2025-10-12 20:56:38.868571967 +0000 UTC m=+0.153899903 container attach 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Oct 12 16:56:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:39 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 2b0a8025-637b-4e99-a09e-10c1bc8561b7 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:39 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1754530205' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.fmjeht(active, since 2m), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"} v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"}]: dispatch
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432148467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:39 np0005481680 lvm[85230]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:56:39 np0005481680 lvm[85230]: VG ceph_vg0 finished
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:39 np0005481680 friendly_pike[85136]: {}
Oct 12 16:56:39 np0005481680 systemd[1]: libpod-16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9.scope: Deactivated successfully.
Oct 12 16:56:39 np0005481680 podman[85117]: 2025-10-12 20:56:39.714703897 +0000 UTC m=+1.000031773 container died 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 16:56:39 np0005481680 systemd[1]: libpod-16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9.scope: Consumed 1.287s CPU time.
Oct 12 16:56:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c27217dd6d35fd5e5c9b256c97692a9970d1357cd3f63f033233852b48a6173e-merged.mount: Deactivated successfully.
Oct 12 16:56:39 np0005481680 podman[85117]: 2025-10-12 20:56:39.766522508 +0000 UTC m=+1.051850354 container remove 16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:56:39 np0005481680 systemd[1]: libpod-conmon-16b907b2c556f7f6fa72be88258de5dfee9f8f8f2c38cfcd5c4c79f8a2be79d9.scope: Deactivated successfully.
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:56:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432148467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Oct 12 16:56:40 np0005481680 angry_wiles[85107]: pool 'cephfs.cephfs.data' created
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:40 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:40 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 0ff31d4d-1d11-4564-a41b-45a36a189dc6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:40 np0005481680 systemd[1]: libpod-48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5.scope: Deactivated successfully.
Oct 12 16:56:40 np0005481680 podman[85076]: 2025-10-12 20:56:40.054219149 +0000 UTC m=+1.539922469 container died 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/432148467' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/432148467' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:56:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d6f12602c2d4b8156f0d7e12e6548c4acaf93d67edd62a9bd23c2453547bfd74-merged.mount: Deactivated successfully.
Oct 12 16:56:40 np0005481680 podman[85076]: 2025-10-12 20:56:40.110396064 +0000 UTC m=+1.596099364 container remove 48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5 (image=quay.io/ceph/ceph:v19, name=angry_wiles, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:56:40 np0005481680 systemd[1]: libpod-conmon-48eb06b3f64e67f04bd3efaaac0f4d708ffcdc42fdc94a7934a0d553b6364df5.scope: Deactivated successfully.
Oct 12 16:56:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v74: 38 pgs: 32 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:56:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:40 np0005481680 python3[85285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
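The Ansible task above shells out to a one-shot ceph container; reflowed for readability (arguments exactly as logged), the invocation is:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool application enable vms rbd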
Oct 12 16:56:40 np0005481680 podman[85286]: 2025-10-12 20:56:40.610777779 +0000 UTC m=+0.044670696 container create f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:40 np0005481680 systemd[1]: Started libpod-conmon-f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d.scope.
Oct 12 16:56:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd46f9170d5b604036dc49253a3e5bc391f5fb454de7a2c6bd3b4ed70972e79c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd46f9170d5b604036dc49253a3e5bc391f5fb454de7a2c6bd3b4ed70972e79c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
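The two kernel lines above are informational: the xfs filesystem backing these overlay mounts was created without big timestamps, so its inode timestamps roll over in 2038. Whether a given xfs filesystem carries the bigtime feature can be checked against its mount point (the path below is an assumption for this host):

    # bigtime=1 means 64-bit, y2038-safe timestamps
    xfs_info / | grep -o 'bigtime=[01]'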
Oct 12 16:56:40 np0005481680 podman[85286]: 2025-10-12 20:56:40.592834211 +0000 UTC m=+0.026727108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:40 np0005481680 podman[85286]: 2025-10-12 20:56:40.704363809 +0000 UTC m=+0.138256736 container init f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:40 np0005481680 podman[85286]: 2025-10-12 20:56:40.710719705 +0000 UTC m=+0.144612582 container start f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:40 np0005481680 podman[85286]: 2025-10-12 20:56:40.713513627 +0000 UTC m=+0.147406574 container attach f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 16:56:40 np0005481680 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event, 32 pgs not in active + clean state
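The progress module opens a global recovery event whenever PGs fall out of active+clean; it closes on its own once peering finishes below. Assuming the module's CLI hooks are available, the event can be inspected while it runs:

    # human-readable progress bars from the mgr progress module
    ceph progress
    # machine-readable dump
    ceph progress json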
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev a7c9e1e3-88e6-4f79-9b53-843e8e2b017a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 5df8f4e9-2e5f-490f-92c2-e81e567b400f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 5df8f4e9-2e5f-490f-92c2-e81e567b400f (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 2b0a8025-637b-4e99-a09e-10c1bc8561b7 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 2b0a8025-637b-4e99-a09e-10c1bc8561b7 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Oct 12 16:56:41 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 20 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20 pruub=9.899651527s) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active pruub 58.098892212s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:41 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=10.920681000s) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active pruub 59.119991302s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:41 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 20 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20 pruub=9.899651527s) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown pruub 58.098892212s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:41 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=10.920681000s) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown pruub 59.119991302s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 0ff31d4d-1d11-4564-a41b-45a36a189dc6 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 0ff31d4d-1d11-4564-a41b-45a36a189dc6 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 second
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev a7c9e1e3-88e6-4f79-9b53-843e8e2b017a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event a7c9e1e3-88e6-4f79-9b53-843e8e2b017a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952427307' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
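Each of these dispatches maps one-to-one onto the admin CLI, and tagging the pools is what eventually clears the POOL_APP_NOT_ENABLED warning above; a sketch for the vms pool, with a check that the tag took effect:

    ceph osd pool application enable vms rbd
    # confirm the application tag on the pool
    ceph osd pool application get vms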
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct 12 16:56:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952427307' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Oct 12 16:56:42 np0005481680 hungry_chatelet[85300]: enabled application 'rbd' on pool 'vms'
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:42 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
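The recurring "failed to return metadata for osd.2" line is the mgr re-probing osd.2 on each new osdmap epoch; the mon answers ENOENT because osd.2 is marked in but its daemon (being deployed on compute-2 above) has not yet booted and registered. The same probe from the CLI:

    # returns an error until the osd.2 daemon starts and reports in
    ceph osd metadata 2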
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.19( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.18( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.17( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.10( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.16( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.11( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.14( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.13( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.13( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.14( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.12( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.15( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.11( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.16( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.10( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.17( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.f( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.8( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.e( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.9( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.15( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.d( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.c( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.12( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.b( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.a( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.7( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.7( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.6( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.5( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.2( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.6( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.2( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.5( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.3( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.4( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.4( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.3( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.8( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.9( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1a( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1b( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1c( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1d( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.19( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1e( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1f( empty local-lis/les=13/14 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.18( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1952427307' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1952427307' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.18( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.10( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.17( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.11( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.12( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.16( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.17( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.12( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.7( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.7( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.19( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.2( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.4( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 systemd[1]: libpod-f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d.scope: Deactivated successfully.
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.1a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.1f( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.6( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [0] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 21 pg[4.4( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [0] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:42 np0005481680 podman[85286]: 2025-10-12 20:56:42.081800921 +0000 UTC m=+1.515693828 container died f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 12 16:56:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cd46f9170d5b604036dc49253a3e5bc391f5fb454de7a2c6bd3b4ed70972e79c-merged.mount: Deactivated successfully.
Oct 12 16:56:42 np0005481680 podman[85286]: 2025-10-12 20:56:42.13508949 +0000 UTC m=+1.568982397 container remove f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d (image=quay.io/ceph/ceph:v19, name=hungry_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 16:56:42 np0005481680 systemd[1]: libpod-conmon-f5c8ed5843df9e814ca1a17f87cee5a38b3a3c1ada73edc393532721a2ceac3d.scope: Deactivated successfully.
Oct 12 16:56:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v77: 100 pgs: 94 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
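The burst of osd.0 lines above is each newly split PG walking the peering state machine (start_peering_interval, then Start -> Primary, then Active on AllReplicasActivated); as PGs complete, the pgmap's "unknown" count drains into "active+clean". The same totals are visible from the CLI:

    # one-line PG summary, matching the pgmap line above (e.g. 94 unknown, 6 active+clean)
    ceph pg stat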
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:56:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:42 np0005481680 python3[85362]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:42 np0005481680 podman[85363]: 2025-10-12 20:56:42.712611367 +0000 UTC m=+0.054620005 container create b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:42 np0005481680 systemd[1]: Started libpod-conmon-b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c.scope.
Oct 12 16:56:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edcc39e657820c8a761ecbc87ec14aa4ed477d984429dff0c9e0a3f0beb0618/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edcc39e657820c8a761ecbc87ec14aa4ed477d984429dff0c9e0a3f0beb0618/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:42 np0005481680 podman[85363]: 2025-10-12 20:56:42.69239153 +0000 UTC m=+0.034400248 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:42 np0005481680 podman[85363]: 2025-10-12 20:56:42.811619088 +0000 UTC m=+0.153627816 container init b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:42 np0005481680 podman[85363]: 2025-10-12 20:56:42.818291192 +0000 UTC m=+0.160299870 container start b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 16:56:42 np0005481680 podman[85363]: 2025-10-12 20:56:42.822356979 +0000 UTC m=+0.164365677 container attach b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 16:56:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 12 16:56:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:43 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: Deploying daemon osd.2 on compute-2
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct 12 16:56:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2714023565' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2714023565' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 22 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.933655739s) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 61.203102112s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 22 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.933655739s) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown pruub 61.203102112s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2714023565' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Oct 12 16:56:44 np0005481680 flamboyant_beaver[85379]: enabled application 'rbd' on pool 'volumes'
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:44 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1f( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1e( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.11( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.12( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.15( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.10( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.14( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.17( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.16( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.9( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.8( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.b( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.13( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.c( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.6( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.d( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.3( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.7( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.4( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.5( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.2( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.a( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.e( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.f( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1c( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1d( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1a( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.1b( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.19( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 23 pg[5.18( empty local-lis/les=16/17 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:44 np0005481680 systemd[1]: libpod-b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c.scope: Deactivated successfully.
Oct 12 16:56:44 np0005481680 podman[85363]: 2025-10-12 20:56:44.15756895 +0000 UTC m=+1.499577618 container died b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 16:56:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8edcc39e657820c8a761ecbc87ec14aa4ed477d984429dff0c9e0a3f0beb0618-merged.mount: Deactivated successfully.
Oct 12 16:56:44 np0005481680 podman[85363]: 2025-10-12 20:56:44.214255048 +0000 UTC m=+1.556263716 container remove b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c (image=quay.io/ceph/ceph:v19, name=flamboyant_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 12 16:56:44 np0005481680 systemd[1]: libpod-conmon-b4225f6bff47328a0ac0602914f6c9525c0d50295ef22164fa41c44a1f4d3c6c.scope: Deactivated successfully.
Oct 12 16:56:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v80: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:44 np0005481680 python3[85439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:44 np0005481680 podman[85440]: 2025-10-12 20:56:44.689027656 +0000 UTC m=+0.056587907 container create 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 16:56:44 np0005481680 systemd[1]: Started libpod-conmon-16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe.scope.
Oct 12 16:56:44 np0005481680 podman[85440]: 2025-10-12 20:56:44.667518375 +0000 UTC m=+0.035078606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a77bc3729a1353c31934fcea940f4018b0d340808f8cb30130056591ce81df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a77bc3729a1353c31934fcea940f4018b0d340808f8cb30130056591ce81df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:44 np0005481680 podman[85440]: 2025-10-12 20:56:44.785701446 +0000 UTC m=+0.153261687 container init 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 12 16:56:44 np0005481680 podman[85440]: 2025-10-12 20:56:44.795595484 +0000 UTC m=+0.163155745 container start 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 16:56:44 np0005481680 podman[85440]: 2025-10-12 20:56:44.799722472 +0000 UTC m=+0.167282733 container attach 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 16:56:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933455333' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:45 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.11( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.10( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.13( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.14( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.15( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.17( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.9( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.12( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2714023565' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.c( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.8( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.7( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.3( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.6( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.4( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.5( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.2( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1c( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.16( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.18( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.1b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.0( empty local-lis/les=22/24 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 24 pg[5.19( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=16/16 les/c/f=17/17/0 sis=22) [0] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:45 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 9 completed events
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:56:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 12 16:56:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2933455333' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct 12 16:56:46 np0005481680 practical_bardeen[85455]: enabled application 'rbd' on pool 'backups'
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:46 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2933455333' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2933455333' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 12 16:56:46 np0005481680 systemd[1]: libpod-16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe.scope: Deactivated successfully.
Oct 12 16:56:46 np0005481680 podman[85440]: 2025-10-12 20:56:46.282663595 +0000 UTC m=+1.650223806 container died 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 16:56:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-99a77bc3729a1353c31934fcea940f4018b0d340808f8cb30130056591ce81df-merged.mount: Deactivated successfully.
Oct 12 16:56:46 np0005481680 systemd[74946]: Starting Mark boot as successful...
Oct 12 16:56:46 np0005481680 systemd[74946]: Finished Mark boot as successful.
Oct 12 16:56:46 np0005481680 podman[85440]: 2025-10-12 20:56:46.325618865 +0000 UTC m=+1.693179066 container remove 16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe (image=quay.io/ceph/ceph:v19, name=practical_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:46 np0005481680 systemd[1]: libpod-conmon-16128526b82f313db5a0b910864743dd08200722e3b0e7de393a96ac75b20cbe.scope: Deactivated successfully.
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v83: 131 pgs: 32 peering, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:46 np0005481680 python3[85518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:46 np0005481680 podman[85519]: 2025-10-12 20:56:46.786540271 +0000 UTC m=+0.047973931 container create d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:56:46 np0005481680 systemd[1]: Started libpod-conmon-d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9.scope.
Oct 12 16:56:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d95d05ae54e44da69b45d78997440564a05801542f50f120547b71fe038e8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d95d05ae54e44da69b45d78997440564a05801542f50f120547b71fe038e8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:46 np0005481680 podman[85519]: 2025-10-12 20:56:46.764757713 +0000 UTC m=+0.026191383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:46 np0005481680 podman[85519]: 2025-10-12 20:56:46.866612319 +0000 UTC m=+0.128046009 container init d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:46 np0005481680 podman[85519]: 2025-10-12 20:56:46.873450047 +0000 UTC m=+0.134883697 container start d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 16:56:46 np0005481680 podman[85519]: 2025-10-12 20:56:46.876499357 +0000 UTC m=+0.137933017 container attach d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 16:56:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct 12 16:56:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1456208618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1456208618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1456208618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct 12 16:56:47 np0005481680 priceless_darwin[85534]: enabled application 'rbd' on pool 'images'
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:47 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:47 np0005481680 systemd[1]: libpod-d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9.scope: Deactivated successfully.
Oct 12 16:56:47 np0005481680 podman[85519]: 2025-10-12 20:56:47.400692544 +0000 UTC m=+0.662126194 container died d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-97d95d05ae54e44da69b45d78997440564a05801542f50f120547b71fe038e8a-merged.mount: Deactivated successfully.
Oct 12 16:56:47 np0005481680 podman[85519]: 2025-10-12 20:56:47.441281883 +0000 UTC m=+0.702715533 container remove d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9 (image=quay.io/ceph/ceph:v19, name=priceless_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:47 np0005481680 systemd[1]: libpod-conmon-d72e53b51f3d33a477b982a2e31b68bbed6f6f9c1efd01e9d94821cbf311f5c9.scope: Deactivated successfully.
Oct 12 16:56:47 np0005481680 python3[85597]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:47 np0005481680 podman[85598]: 2025-10-12 20:56:47.811787031 +0000 UTC m=+0.050619400 container create 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Oct 12 16:56:47 np0005481680 systemd[1]: Started libpod-conmon-300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f.scope.
Oct 12 16:56:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edaa0af517a754c5c2524ba2c4c9632c92b5005af083816983e4ed25700373c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edaa0af517a754c5c2524ba2c4c9632c92b5005af083816983e4ed25700373c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:47 np0005481680 podman[85598]: 2025-10-12 20:56:47.791234185 +0000 UTC m=+0.030066574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:47 np0005481680 podman[85598]: 2025-10-12 20:56:47.891607142 +0000 UTC m=+0.130439521 container init 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:47 np0005481680 podman[85598]: 2025-10-12 20:56:47.898482182 +0000 UTC m=+0.137314571 container start 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:56:47 np0005481680 podman[85598]: 2025-10-12 20:56:47.901984184 +0000 UTC m=+0.140816553 container attach 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:56:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 12 16:56:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2134341110' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1456208618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2134341110' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2134341110' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct 12 16:56:48 np0005481680 magical_darwin[85614]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:48 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:48 np0005481680 systemd[1]: libpod-300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f.scope: Deactivated successfully.
Oct 12 16:56:48 np0005481680 podman[85598]: 2025-10-12 20:56:48.40279728 +0000 UTC m=+0.641629689 container died 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-edaa0af517a754c5c2524ba2c4c9632c92b5005af083816983e4ed25700373c5-merged.mount: Deactivated successfully.
Oct 12 16:56:48 np0005481680 podman[85598]: 2025-10-12 20:56:48.447903706 +0000 UTC m=+0.686736065 container remove 300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f (image=quay.io/ceph/ceph:v19, name=magical_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:48 np0005481680 systemd[1]: libpod-conmon-300cff32f841576b643bb6fb27aa823d48bafdaf5bf64ecbcf04219b00fe175f.scope: Deactivated successfully.
Oct 12 16:56:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v86: 131 pgs: 32 peering, 99 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:48 np0005481680 python3[85701]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:48 np0005481680 podman[85702]: 2025-10-12 20:56:48.864281312 +0000 UTC m=+0.067484490 container create fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:56:48 np0005481680 systemd[1]: Started libpod-conmon-fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556.scope.
Oct 12 16:56:48 np0005481680 podman[85702]: 2025-10-12 20:56:48.834823934 +0000 UTC m=+0.038027122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527ae66ee05c7c33c260440b04b48399c5710c3fac6e9fa4291b7146b14108e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527ae66ee05c7c33c260440b04b48399c5710c3fac6e9fa4291b7146b14108e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:48 np0005481680 podman[85702]: 2025-10-12 20:56:48.966658281 +0000 UTC m=+0.169861429 container init fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 16:56:48 np0005481680 podman[85702]: 2025-10-12 20:56:48.972146654 +0000 UTC m=+0.175349782 container start fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:48 np0005481680 podman[85702]: 2025-10-12 20:56:48.976129438 +0000 UTC m=+0.179332576 container attach fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 16:56:49 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 12 16:56:49 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220510669' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2134341110' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='osd.2 [v2:192.168.122.102:6800/3968602224,v1:192.168.122.102:6801/3968602224]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3220510669' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220510669' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct 12 16:56:49 np0005481680 admiring_dewdney[85730]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e28 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Oct 12 16:56:49 np0005481680 podman[85702]: 2025-10-12 20:56:49.595865309 +0000 UTC m=+0.799068487 container died fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:49 np0005481680 systemd[1]: libpod-fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556.scope: Deactivated successfully.
Oct 12 16:56:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-527ae66ee05c7c33c260440b04b48399c5710c3fac6e9fa4291b7146b14108e8-merged.mount: Deactivated successfully.
Oct 12 16:56:49 np0005481680 podman[85702]: 2025-10-12 20:56:49.635230297 +0000 UTC m=+0.838433475 container remove fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556 (image=quay.io/ceph/ceph:v19, name=admiring_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:56:49 np0005481680 systemd[1]: libpod-conmon-fcbf9803dbff011dd75c4e8d47db7541adcc3b6f151659a2f259a73bf1676556.scope: Deactivated successfully.
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3220510669' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='osd.2 [v2:192.168.122.102:6800/3968602224,v1:192.168.122.102:6801/3968602224]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.467354774s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.224708557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.467354774s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.224708557s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.11( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662345886s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419845581s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.10( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662336349s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419868469s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473556519s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231109619s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.11( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662290573s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419845581s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.10( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662303925s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419868469s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473517418s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231109619s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.13( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662152290s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419883728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.13( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662152290s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419883728s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.474275589s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232078552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.474275589s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232078552s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473474503s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231384277s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.12( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662649155s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.420570374s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473402023s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231330872s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661982536s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419921875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473453522s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231384277s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.12( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.662649155s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.420570374s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661951065s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419921875s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473368645s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231330872s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473392487s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231529236s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661676407s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419815063s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473392487s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231529236s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473552704s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231864929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473207474s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231536865s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473552704s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231864929s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.16( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.666516304s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424911499s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.16( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.666501999s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424911499s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473207474s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231536865s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472983360s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231407166s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472930908s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231407166s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473049164s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231567383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473031998s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231567383s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.9( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661314011s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.419960022s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473247528s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231895447s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473218918s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231895447s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473303795s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.231987000s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.9( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661276817s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419960022s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.8( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665917397s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424674988s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473303795s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231987000s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.475193024s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.234001160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.8( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665917397s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424674988s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.475193024s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.234001160s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473080635s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232017517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473237038s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232231140s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473080635s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232017517s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665525436s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424522400s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473209381s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232231140s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473448753s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232482910s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665525436s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424522400s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473413467s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232482910s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473755836s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232917786s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473713875s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232917786s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665258408s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424575806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.661653519s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419815063s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665258408s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424575806s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473197937s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232612610s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473183632s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232612610s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473154068s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232620239s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473120689s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232620239s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665081978s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424652100s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665064812s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424652100s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473015785s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232643127s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=22/24 n=0 ec=16/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665316582s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424995422s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473015785s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232643127s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=22/24 n=0 ec=16/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.665316582s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424995422s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473183632s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232971191s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473148346s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232933044s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473171234s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232971191s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473206520s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233039856s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473148346s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232933044s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473206520s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233039856s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473131180s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233085632s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664731979s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424697876s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473131180s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233085632s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664698601s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424697876s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472548485s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.232589722s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473094940s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233146667s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473067284s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233146667s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472518921s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232589722s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473021507s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233245850s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664505005s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424774170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664487839s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424774170s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472946167s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233245850s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472890854s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233306885s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473173141s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233612061s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664447784s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424896240s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473173141s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233612061s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472890854s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233306885s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664447784s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664258957s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424880981s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.f( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664242744s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424880981s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473038673s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233726501s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472993851s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233734131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1c( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664069176s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424819946s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.473009109s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233726501s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472993851s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233734131s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472953796s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233726501s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1c( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.664043427s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424819946s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472953796s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233726501s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472889900s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233741760s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472889900s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233741760s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663949966s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424942017s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472824097s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233818054s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663949966s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424942017s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472792625s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233825684s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472824097s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233818054s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472854614s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233947754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472774506s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233856201s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472792625s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233825684s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472844124s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233947754s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472737312s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233856201s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472710609s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233955383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472693443s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233955383s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663676262s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424987793s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472608566s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233955383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.1b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663651466s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424987793s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472586632s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233970642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472586632s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233970642s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472608566s) [] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233955383s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.18( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663526535s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424972534s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.18( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663492203s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424972534s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472344398s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 73.233978271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.472309113s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233978271s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663071632s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active pruub 68.424751282s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=29 pruub=10.663071632s) [] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424751282s@ mbc={}] state<Start>: transitioning to Stray
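Note: the burst of PeeringState::start_peering_interval / "transitioning to Stray" messages above is osd.0 reacting to osdmap epoch 29, which drops it from the up/acting set of these placement groups (role 0 -> -1): each affected PG restarts its peering interval and the local replica parks in Stray until the new primary contacts it. Should a PG appear stuck at this stage, its peering state can be inspected directly; a minimal example, using pg 5.f from the lines above:

    ceph pg 5.f query            # full peering state and past intervals for one PG
    ceph pg dump_stuck inactive  # list PGs that have not reached active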
Oct 12 16:56:50 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3968602224; not ready for session (expect reconnect)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:50 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
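Note: "failed to return metadata for osd.2: (2) No such file or directory" is the mgr polling the mon for an OSD that has not finished booting, so the mon returns ENOENT; the message repeats below until osd.2 registers at osdmap e31. Once the OSD is up, the same query the mgr issues can be run by hand and returns JSON metadata (hostname, devices, bluestore details):

    ceph osd metadata 2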
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:56:50 np0005481680 python3[85910]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:56:50 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 7be8d537-64e0-4072-be02-ad985485833d (Global Recovery Event) in 10 seconds
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 12 16:56:51 np0005481680 python3[85981]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302610.4461184-33764-16129331217773/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
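Note: osdmap e30 ("3 total, 2 up, 3 in") records osd.2 as in but not yet up, matching the session-open rejections around it. Quick ways to see the same state from the CLI:

    ceph osd stat        # e.g. "3 osds: 2 up, 3 in"
    ceph osd tree down   # show only OSDs currently marked down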
Oct 12 16:56:51 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3968602224; not ready for session (expect reconnect)
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:51 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=18/12 lis/c=18/18 les/c/f=20/20/0 sis=29) [0] r=0 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 12 16:56:51 np0005481680 ceph-mon[73608]: Cluster is now healthy
Oct 12 16:56:51 np0005481680 python3[86083]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:56:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 12 16:56:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 12 16:56:52 np0005481680 python3[86158]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302611.5369143-33778-68091560862833/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=3fda16df56e97f931d9b65fbe8ccba92bd64a917 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3968602224; not ready for session (expect reconnect)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
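Note: cephadm's memory autotuner divides host RAM across the colocated daemons and, on this small CI-sized node, arrives at 134240665 bytes (about 128 MiB), below the hard osd_memory_target floor of 939524096 bytes (896 MiB); the mon rejects the value and the warning will recur on each cephadm serve loop. Assuming the stock cephadm knobs, the autotuner can be switched off or an explicit value pinned:

    ceph config set osd osd_memory_target_autotune false   # stop per-host autotuning
    ceph config set osd osd_memory_target 939524096        # or pin a value at/above the minimum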
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:56:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:56:52 np0005481680 python3[86208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
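Note: this Ansible task runs a one-shot quay.io/ceph/ceph:v19 container whose entrypoint imports the rendered assimilate_ceph.conf into the mon's centralized config database; "config assimilate-conf" prints back any options it could not absorb, which is the [global] fsid/mon_host remainder echoed by the exciting_dewdney container below. Stripped of the podman wrapper, the equivalent direct call is:

    ceph config assimilate-conf -i /home/ceph-admin/assimilate_ceph.conf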
Oct 12 16:56:52 np0005481680 podman[86257]: 2025-10-12 20:56:52.98421118 +0000 UTC m=+0.068773682 container create 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:53 np0005481680 systemd[1]: Started libpod-conmon-14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2.scope.
Oct 12 16:56:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:52.95610326 +0000 UTC m=+0.040665852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 12 16:56:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c58d61ea308a6dab5890532fdc8c3262fce90a116075f3e4e9b38fa3d70615d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c58d61ea308a6dab5890532fdc8c3262fce90a116075f3e4e9b38fa3d70615d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c58d61ea308a6dab5890532fdc8c3262fce90a116075f3e4e9b38fa3d70615d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:53.09859211 +0000 UTC m=+0.183154692 container init 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:53.109329236 +0000 UTC m=+0.193891738 container start 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:53.112934257 +0000 UTC m=+0.197496829 container attach 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523827953' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523827953' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 12 16:56:53 np0005481680 exciting_dewdney[86297]: 
Oct 12 16:56:53 np0005481680 exciting_dewdney[86297]: [global]
Oct 12 16:56:53 np0005481680 exciting_dewdney[86297]: 	fsid = 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 16:56:53 np0005481680 exciting_dewdney[86297]: 	mon_host = 192.168.122.100
Oct 12 16:56:53 np0005481680 systemd[1]: libpod-14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2.scope: Deactivated successfully.
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:53.549758476 +0000 UTC m=+0.634321008 container died 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0c58d61ea308a6dab5890532fdc8c3262fce90a116075f3e4e9b38fa3d70615d-merged.mount: Deactivated successfully.
Oct 12 16:56:53 np0005481680 podman[86257]: 2025-10-12 20:56:53.597752507 +0000 UTC m=+0.682314999 container remove 14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2 (image=quay.io/ceph/ceph:v19, name=exciting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 systemd[1]: libpod-conmon-14fccb56428433b6d7dc18d87df9d2bb81cfb3b4ecaa77a2c60dfeb05748eda2.scope: Deactivated successfully.
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3968602224; not ready for session (expect reconnect)
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:53 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1523827953' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:53 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1523827953' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 12 16:56:54 np0005481680 python3[86611]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
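Note: the second one-shot container writes to the mon's generic key/value store rather than to a typed config option; config-key values are opaque strings, so the mon performs no validation and the container only echoes "set ssl_option" below. The underlying commands, with their podman wrapper removed:

    ceph config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
    ceph config-key get ssl_option   # read the stored value back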
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.073618955 +0000 UTC m=+0.044315856 container create d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:56:54 np0005481680 systemd[1]: Started libpod-conmon-d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4.scope.
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.053737976 +0000 UTC m=+0.024434917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37459300651dcb83f3b09f5d392184f756192856f9eb173d313182087e683b61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37459300651dcb83f3b09f5d392184f756192856f9eb173d313182087e683b61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37459300651dcb83f3b09f5d392184f756192856f9eb173d313182087e683b61/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.184021983 +0000 UTC m=+0.154718934 container init d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.190139 +0000 UTC m=+0.160835941 container start d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.19444212 +0000 UTC m=+0.165139051 container attach d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 12 16:56:54 np0005481680 ceph-mgr[73901]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3968602224; not ready for session (expect reconnect)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct 12 16:56:54 np0005481680 ceph-mgr[73901]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1753409706' entity='client.admin' 
Oct 12 16:56:54 np0005481680 amazing_cohen[86724]: set ssl_option
Oct 12 16:56:54 np0005481680 systemd[1]: libpod-d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4.scope: Deactivated successfully.
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.645627697 +0000 UTC m=+0.616324638 container died d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 16:56:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-37459300651dcb83f3b09f5d392184f756192856f9eb173d313182087e683b61-merged.mount: Deactivated successfully.
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: OSD bench result of 8360.685292 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
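Note: at startup each OSD runs a brief internal bench to seed the mclock scheduler; the 8360 IOPS measured on this virtual disk falls outside the 50-500 IOPS plausibility window, so Ceph keeps the default capacity of 315 IOPS, exactly as the message says. Following its own recommendation, a value measured with an external tool such as fio can be pinned (ssd/hdd suffix per device class):

    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 8360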
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1753409706' entity='client.admin' 
Oct 12 16:56:54 np0005481680 podman[86669]: 2025-10-12 20:56:54.697794783 +0000 UTC m=+0.668491724 container remove d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4 (image=quay.io/ceph/ceph:v19, name=amazing_cohen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:56:54 np0005481680 systemd[1]: libpod-conmon-d1dcb4eff142e8847ad6a588abf1bbb0255bb52fc4df6cffff1fe404d07bd1c4.scope: Deactivated successfully.
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/3968602224,v1:192.168.122.102:6801/3968602224] boot
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
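Note: osd.2's boot at e31 ("3 total, 3 up, 3 in") resolves the repeated handle_open/metadata failures above, and the epoch-31 peering messages that follow (up [] -> [2]) are PGs being remapped onto the newly available OSD. A status check at this point should show all three OSDs up and every PG active+clean, consistent with the pgmap lines above reporting 131 PGs:

    ceph -s   # expect HEALTH_OK with 131 active+clean PGs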
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:56:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.301783562s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.224708557s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.301739693s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.224708557s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.496817112s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419883728s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.496774673s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.419883728s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.497285843s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.420570374s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.497267246s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.420570374s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308724403s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232078552s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308170319s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231529236s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308699608s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232078552s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308139801s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231529236s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.307924271s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231536865s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.310359001s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.234001160s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308348656s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231987000s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.310340881s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.234001160s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500994682s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424674988s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308293343s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231987000s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.307846069s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231536865s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500965595s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424674988s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308218002s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232017517s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500710964s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424522400s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308200836s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232017517s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500692844s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424522400s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500609398s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424575806s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500591755s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424575806s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.307781219s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231864929s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308626175s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232643127s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308529854s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232643127s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.307754517s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.231864929s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308703423s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232933044s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308689117s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.232933044s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308721542s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233039856s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308698654s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233039856s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308714867s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233085632s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308697701s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233085632s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500330925s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424751282s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500306606s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424751282s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=22/24 n=0 ec=16/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500605583s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424995422s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308744431s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233306885s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308724403s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233306885s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=22/24 n=0 ec=16/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500497341s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424995422s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500224113s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424896240s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308929443s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233612061s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308912277s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233612061s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308995247s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233726501s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500200748s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308958054s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233741760s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308976173s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233726501s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308946609s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233741760s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308924675s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233818054s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308841705s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233734131s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308897972s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233825684s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.500005722s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424942017s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308877945s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233825684s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=22/24 n=0 ec=22/16 lis/c=22/22 les/c/f=24/24/0 sis=31 pruub=6.499991894s) [2] r=-1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.424942017s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308765411s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233734131s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308854103s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233955383s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308834076s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233970642s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=20/21 n=0 ec=20/13 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308832169s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233955383s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308889389s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233818054s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=20/21 n=0 ec=20/14 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=11.308818817s) [2] r=-1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.233970642s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.051432852 +0000 UTC m=+0.039396280 container create 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct 12 16:56:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct 12 16:56:55 np0005481680 systemd[1]: Started libpod-conmon-2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73.scope.
Oct 12 16:56:55 np0005481680 python3[86887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.033151123 +0000 UTC m=+0.021114571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.149412751 +0000 UTC m=+0.137376239 container init 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.159449168 +0000 UTC m=+0.147412596 container start 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.161355117 +0000 UTC m=+0.040987281 container create 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:56:55 np0005481680 priceless_lovelace[86921]: 167 167
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.170559333 +0000 UTC m=+0.158522771 container attach 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.170925922 +0000 UTC m=+0.158889350 container died 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-57a39eba9346dffe031968f6880ba74452911a21fa1763cf6bea7fbf16dbfdc9-merged.mount: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86905]: 2025-10-12 20:56:55.208346521 +0000 UTC m=+0.196309949 container remove 2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lovelace, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:56:55 np0005481680 systemd[1]: Started libpod-conmon-34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046.scope.
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-conmon-2ece7523324a3d01145f958f66037872b1c8a28ae16ab9c99be4e40ebb20ae73.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.145420819 +0000 UTC m=+0.025053013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c035c722fbe3f35c0db53d33b748082cdba44f1e58626425c8d189554c649a83/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c035c722fbe3f35c0db53d33b748082cdba44f1e58626425c8d189554c649a83/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c035c722fbe3f35c0db53d33b748082cdba44f1e58626425c8d189554c649a83/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.279902604 +0000 UTC m=+0.159534788 container init 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.28639689 +0000 UTC m=+0.166029064 container start 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.290002172 +0000 UTC m=+0.169634346 container attach 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.411218627 +0000 UTC m=+0.052469445 container create 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:55 np0005481680 systemd[1]: Started libpod-conmon-32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585.scope.
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.391260316 +0000 UTC m=+0.032511144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.510587063 +0000 UTC m=+0.151837901 container init 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.520956698 +0000 UTC m=+0.162207516 container start 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.525087824 +0000 UTC m=+0.166338662 container attach 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:55 np0005481680 great_pascal[86956]: Scheduled rgw.rgw update...
Oct 12 16:56:55 np0005481680 great_pascal[86956]: Scheduled ingress.rgw.default update...
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.684925198 +0000 UTC m=+0.564557362 container died 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c035c722fbe3f35c0db53d33b748082cdba44f1e58626425c8d189554c649a83-merged.mount: Deactivated successfully.
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 12 16:56:55 np0005481680 podman[86923]: 2025-10-12 20:56:55.724133312 +0000 UTC m=+0.603765476 container remove 34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046 (image=quay.io/ceph/ceph:v19, name=great_pascal, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-conmon-34882f9d4ddc746b5139aa3926950c7533b0eb0336cd4a961ba9a1ca565d4046.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: osd.2 [v2:192.168.122.102:6800/3968602224,v1:192.168.122.102:6801/3968602224] boot
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:55 np0005481680 nifty_murdock[87001]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:56:55 np0005481680 nifty_murdock[87001]: --> All data devices are unavailable
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.834818767 +0000 UTC m=+0.476069595 container died 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3e26918a3e5d3f18aaedfbe060cf0548a938ecb49e358136d597d670d384f016-merged.mount: Deactivated successfully.
Oct 12 16:56:55 np0005481680 podman[86966]: 2025-10-12 20:56:55.891484799 +0000 UTC m=+0.532735627 container remove 32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:55 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 10 completed events
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:56:55 np0005481680 systemd[1]: libpod-conmon-32b38f4a41429bec6ab5de035fa6f589e6be286c79305f97cad51bef89575585.scope: Deactivated successfully.
Oct 12 16:56:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:56 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 12 16:56:56 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 12 16:56:56 np0005481680 python3[87168]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:56:56 np0005481680 podman[87251]: 2025-10-12 20:56:56.452007186 +0000 UTC m=+0.065823687 container create ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:56 np0005481680 systemd[1]: Started libpod-conmon-ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb.scope.
Oct 12 16:56:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:56:56 np0005481680 podman[87251]: 2025-10-12 20:56:56.424845401 +0000 UTC m=+0.038661952 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:56 np0005481680 podman[87251]: 2025-10-12 20:56:56.536610644 +0000 UTC m=+0.150427205 container init ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:56 np0005481680 podman[87251]: 2025-10-12 20:56:56.542668319 +0000 UTC m=+0.156484780 container start ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:56 np0005481680 podman[87251]: 2025-10-12 20:56:56.546362704 +0000 UTC m=+0.160179265 container attach ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:56 np0005481680 brave_austin[87295]: 167 167
Oct 12 16:56:56 np0005481680 systemd[1]: libpod-ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb.scope: Deactivated successfully.
Oct 12 16:56:56 np0005481680 python3[87291]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302615.944917-33797-15325708743882/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:56:56 np0005481680 podman[87300]: 2025-10-12 20:56:56.604956924 +0000 UTC m=+0.038007554 container died ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e9f7518778c4e5d9a3bca967dbe9d78b24c3e34b188242b8ad9b5c8557a558ed-merged.mount: Deactivated successfully.
Oct 12 16:56:56 np0005481680 podman[87300]: 2025-10-12 20:56:56.647286249 +0000 UTC m=+0.080336779 container remove ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:56 np0005481680 systemd[1]: libpod-conmon-ebb51151f750cf2f95ab1a21b055e9b051e5be12a559d4b0d39d1462e585bedb.scope: Deactivated successfully.
Oct 12 16:56:56 np0005481680 ceph-mon[73608]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:56:56 np0005481680 ceph-mon[73608]: Saving service ingress.rgw.default spec with placement count:2
Oct 12 16:56:56 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:56 np0005481680 podman[87346]: 2025-10-12 20:56:56.824143049 +0000 UTC m=+0.043635269 container create bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:56:56 np0005481680 systemd[1]: Started libpod-conmon-bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6.scope.
Oct 12 16:56:56 np0005481680 podman[87346]: 2025-10-12 20:56:56.804789433 +0000 UTC m=+0.024281653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25db1edd3d9ea49140ca7376420166786b8b86bdf7e0393333e48d252aeba3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25db1edd3d9ea49140ca7376420166786b8b86bdf7e0393333e48d252aeba3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25db1edd3d9ea49140ca7376420166786b8b86bdf7e0393333e48d252aeba3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25db1edd3d9ea49140ca7376420166786b8b86bdf7e0393333e48d252aeba3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:56 np0005481680 podman[87346]: 2025-10-12 20:56:56.934297551 +0000 UTC m=+0.153789781 container init bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:56 np0005481680 podman[87346]: 2025-10-12 20:56:56.949192612 +0000 UTC m=+0.168684862 container start bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:56 np0005481680 podman[87346]: 2025-10-12 20:56:56.953137543 +0000 UTC m=+0.172629773 container attach bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Oct 12 16:56:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Oct 12 16:56:57 np0005481680 python3[87392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]: {
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:    "0": [
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:        {
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "devices": [
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "/dev/loop3"
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            ],
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "lv_name": "ceph_lv0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "lv_size": "21470642176",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "name": "ceph_lv0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "tags": {
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.cluster_name": "ceph",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.crush_device_class": "",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.encrypted": "0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.osd_id": "0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.type": "block",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.vdo": "0",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:                "ceph.with_tpm": "0"
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            },
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "type": "block",
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:            "vg_name": "ceph_vg0"
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:        }
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]:    ]
Oct 12 16:56:57 np0005481680 upbeat_mclean[87362]: }
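The JSON block emitted by upbeat_mclean is ceph-volume's LVM inventory for OSD 0: each LV carries ceph.* tags (cluster fsid, osd_id, block uuid) that let the OSD be reassembled from the device alone. A sketch of how to reproduce the listing, assuming a host with the same image and direct access to the LVM devices:

    # ceph-volume reads the ceph.* LV tags shown above; --privileged plus the
    # /dev and /run/lvm mounts are assumptions needed for LVM access.
    podman run --rm --privileged \
        --volume /dev:/dev --volume /run/lvm:/run/lvm \
        --entrypoint ceph-volume quay.io/ceph/ceph:v19 lvm list --format json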
Oct 12 16:56:57 np0005481680 systemd[1]: libpod-bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6.scope: Deactivated successfully.
Oct 12 16:56:57 np0005481680 podman[87346]: 2025-10-12 20:56:57.316540272 +0000 UTC m=+0.536032512 container died bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.350256945 +0000 UTC m=+0.088882438 container create d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-25db1edd3d9ea49140ca7376420166786b8b86bdf7e0393333e48d252aeba3c9-merged.mount: Deactivated successfully.
Oct 12 16:56:57 np0005481680 podman[87346]: 2025-10-12 20:56:57.382139761 +0000 UTC m=+0.601631971 container remove bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:57 np0005481680 systemd[1]: Started libpod-conmon-d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51.scope.
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.312458387 +0000 UTC m=+0.051083970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
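The hash in podman's "image pull" events (aade1b12...) is the local image ID that both the v19 tag and the @sha256:7c69e5... pinned reference resolve to, which is why containers created from either reference run identical bits. To check the resolution by hand:

    # Local image ID behind the tag (should match the aade1b12... above):
    podman image inspect --format '{{.Id}}' quay.io/ceph/ceph:v19
    # Registry digest behind the tag:
    podman image inspect --format '{{.Digest}}' quay.io/ceph/ceph:v19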
Oct 12 16:56:57 np0005481680 systemd[1]: libpod-conmon-bde44c2cd17ac0443c963122c32c36afec2802f1573a8ee64cb0e5b9232765e6.scope: Deactivated successfully.
Oct 12 16:56:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c73d9b2bd218e649cd6e2fe254d39cbaac8cbf3c96147be339240e1dcadd2e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c73d9b2bd218e649cd6e2fe254d39cbaac8cbf3c96147be339240e1dcadd2e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c73d9b2bd218e649cd6e2fe254d39cbaac8cbf3c96147be339240e1dcadd2e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
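These kernel notices mean the bind-mounted XFS paths were formatted without the bigtime feature, so their inode timestamps saturate at 2038-01-19 (0x7fffffff); they are informational, not errors. Whether a given filesystem carries the feature can be checked with xfs_info:

    # bigtime=1 lifts the 2038 limit; bigtime=0 matches the messages above.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'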
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.43518657 +0000 UTC m=+0.173812083 container init d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.440920387 +0000 UTC m=+0.179545870 container start d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.443716588 +0000 UTC m=+0.182342071 container attach d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 12 16:56:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:57 np0005481680 thirsty_margulis[87424]: Scheduled node-exporter update...
Oct 12 16:56:57 np0005481680 thirsty_margulis[87424]: Scheduled grafana update...
Oct 12 16:56:57 np0005481680 thirsty_margulis[87424]: Scheduled prometheus update...
Oct 12 16:56:57 np0005481680 thirsty_margulis[87424]: Scheduled alertmanager update...
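The mgr entries above show cephadm persisting four monitoring service specs from the applied file: node-exporter everywhere, and grafana, prometheus, and alertmanager pinned to compute-0 with count 1. The same placements expressed imperatively with the cephadm CLI, as a sketch (this run applied them from a spec file instead):

    ceph orch apply node-exporter --placement='*'
    ceph orch apply grafana       --placement='count:1 compute-0'
    ceph orch apply prometheus    --placement='count:1 compute-0'
    ceph orch apply alertmanager  --placement='count:1 compute-0'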
Oct 12 16:56:57 np0005481680 podman[87538]: 2025-10-12 20:56:57.980870738 +0000 UTC m=+0.073534604 container create 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 16:56:57 np0005481680 systemd[1]: libpod-d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51.scope: Deactivated successfully.
Oct 12 16:56:57 np0005481680 podman[87397]: 2025-10-12 20:56:57.987237391 +0000 UTC m=+0.725862914 container died d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:56:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 12 16:56:58 np0005481680 systemd[1]: Started libpod-conmon-8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27.scope.
Oct 12 16:56:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9c73d9b2bd218e649cd6e2fe254d39cbaac8cbf3c96147be339240e1dcadd2e6-merged.mount: Deactivated successfully.
Oct 12 16:56:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 12 16:56:58 np0005481680 podman[87538]: 2025-10-12 20:56:57.94623568 +0000 UTC m=+0.038899596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:58 np0005481680 podman[87397]: 2025-10-12 20:56:58.047463894 +0000 UTC m=+0.786089417 container remove d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51 (image=quay.io/ceph/ceph:v19, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 12 16:56:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:58 np0005481680 systemd[1]: libpod-conmon-d0e6cb93a0044cd0f2b0f28bd805e62a9011a88221714949751626d881552e51.scope: Deactivated successfully.
Oct 12 16:56:58 np0005481680 podman[87538]: 2025-10-12 20:56:58.084359229 +0000 UTC m=+0.177023115 container init 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 16:56:58 np0005481680 podman[87538]: 2025-10-12 20:56:58.08949157 +0000 UTC m=+0.182155396 container start 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:58 np0005481680 podman[87538]: 2025-10-12 20:56:58.09261207 +0000 UTC m=+0.185275916 container attach 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:56:58 np0005481680 peaceful_franklin[87565]: 167 167
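The bare "167 167" from peaceful_franklin (and later stoic_diffie) is plausibly cephadm probing the uid/gid of the ceph user inside the image; 167 is the uid and gid the Ceph packages reserve. A sketch of such a probe, with the stat target path being an assumption:

    # Print owner uid and gid of a ceph-owned path inside the image:
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph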
Oct 12 16:56:58 np0005481680 systemd[1]: libpod-8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27.scope: Deactivated successfully.
Oct 12 16:56:58 np0005481680 podman[87570]: 2025-10-12 20:56:58.13008526 +0000 UTC m=+0.021216335 container died 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 16:56:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c5d35f54a50bc854657201be1c604f85b1fdc5e3e30d2dd3edd0d1ea523fa882-merged.mount: Deactivated successfully.
Oct 12 16:56:58 np0005481680 podman[87570]: 2025-10-12 20:56:58.172595049 +0000 UTC m=+0.063726144 container remove 8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_franklin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 16:56:58 np0005481680 systemd[1]: libpod-conmon-8273a3d64032bb78633fb53241ca05839d7ee2d1164d403390ea27b9b2951b27.scope: Deactivated successfully.
Oct 12 16:56:58 np0005481680 podman[87592]: 2025-10-12 20:56:58.388521519 +0000 UTC m=+0.066082633 container create 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:58 np0005481680 systemd[1]: Started libpod-conmon-746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370.scope.
Oct 12 16:56:58 np0005481680 podman[87592]: 2025-10-12 20:56:58.362572525 +0000 UTC m=+0.040133749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:56:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b857fde56f76fe9efcdc71f28f96a965649e6e329f372b5f0eb61fc9b0b83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b857fde56f76fe9efcdc71f28f96a965649e6e329f372b5f0eb61fc9b0b83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b857fde56f76fe9efcdc71f28f96a965649e6e329f372b5f0eb61fc9b0b83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/322b857fde56f76fe9efcdc71f28f96a965649e6e329f372b5f0eb61fc9b0b83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v96: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
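The mgr's pgmap digest shows the cluster mid-peering: 44 of 131 PGs still peering, 87 active+clean, with the data/used/avail figures summed across OSDs. The same summary on demand:

    ceph pg stat      # one-line PG state counts
    ceph -s           # full cluster status, including the pgmap line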
Oct 12 16:56:58 np0005481680 podman[87592]: 2025-10-12 20:56:58.495578662 +0000 UTC m=+0.173139866 container init 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 16:56:58 np0005481680 podman[87592]: 2025-10-12 20:56:58.509341885 +0000 UTC m=+0.186903009 container start 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:56:58 np0005481680 podman[87592]: 2025-10-12 20:56:58.512549507 +0000 UTC m=+0.190110671 container attach 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:58 np0005481680 python3[87638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:58 np0005481680 podman[87646]: 2025-10-12 20:56:58.753588791 +0000 UTC m=+0.044278276 container create 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:56:58 np0005481680 systemd[1]: Started libpod-conmon-87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7.scope.
Oct 12 16:56:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394737547c8ccab2116ba7b9fb097b220a160054a3e52ebca973965ea10c1f6d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394737547c8ccab2116ba7b9fb097b220a160054a3e52ebca973965ea10c1f6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394737547c8ccab2116ba7b9fb097b220a160054a3e52ebca973965ea10c1f6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:58 np0005481680 podman[87646]: 2025-10-12 20:56:58.731678839 +0000 UTC m=+0.022368304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: Saving service node-exporter spec with placement *
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: Saving service grafana spec with placement compute-0;count:1
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: Saving service prometheus spec with placement compute-0;count:1
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: Saving service alertmanager spec with placement compute-0;count:1
Oct 12 16:56:58 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:58 np0005481680 podman[87646]: 2025-10-12 20:56:58.837866299 +0000 UTC m=+0.128555834 container init 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 16:56:58 np0005481680 podman[87646]: 2025-10-12 20:56:58.846135251 +0000 UTC m=+0.136824686 container start 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Oct 12 16:56:58 np0005481680 podman[87646]: 2025-10-12 20:56:58.849320382 +0000 UTC m=+0.140009867 container attach 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:56:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 12 16:56:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 12 16:56:59 np0005481680 lvm[87747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:56:59 np0005481680 lvm[87747]: VG ceph_vg0 finished
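The lvm[87747] pair is event-driven autoactivation: once /dev/loop3 came online, VG ceph_vg0 was complete and its LVs were activated, which is what makes the OSD's ceph_lv0 reachable. Verifying by hand with standard LVM tooling, using only the names logged above:

    pvs /dev/loop3
    vgs ceph_vg0
    lvs -o lv_name,lv_path,lv_tags ceph_vg0   # shows the ceph.* tags again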
Oct 12 16:56:59 np0005481680 ecstatic_lovelace[87609]: {}
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/384371714' entity='client.admin' 
Oct 12 16:56:59 np0005481680 systemd[1]: libpod-87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7.scope: Deactivated successfully.
Oct 12 16:56:59 np0005481680 systemd[1]: libpod-746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370.scope: Deactivated successfully.
Oct 12 16:56:59 np0005481680 systemd[1]: libpod-746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370.scope: Consumed 1.173s CPU time.
Oct 12 16:56:59 np0005481680 podman[87592]: 2025-10-12 20:56:59.255744994 +0000 UTC m=+0.933306148 container died 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:56:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-322b857fde56f76fe9efcdc71f28f96a965649e6e329f372b5f0eb61fc9b0b83-merged.mount: Deactivated successfully.
Oct 12 16:56:59 np0005481680 podman[87752]: 2025-10-12 20:56:59.290487023 +0000 UTC m=+0.040067627 container died 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 16:56:59 np0005481680 podman[87592]: 2025-10-12 20:56:59.320968584 +0000 UTC m=+0.998529718 container remove 746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lovelace, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:56:59 np0005481680 systemd[1]: libpod-conmon-746f56f46823f185c33fa75945a24052eeaf67a4edf0174deb5a7c071a756370.scope: Deactivated successfully.
Oct 12 16:56:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-394737547c8ccab2116ba7b9fb097b220a160054a3e52ebca973965ea10c1f6d-merged.mount: Deactivated successfully.
Oct 12 16:56:59 np0005481680 podman[87752]: 2025-10-12 20:56:59.346903708 +0000 UTC m=+0.096484242 container remove 87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7 (image=quay.io/ceph/ceph:v19, name=interesting_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 16:56:59 np0005481680 systemd[1]: libpod-conmon-87bbc183709ddde7d0c7dd990f824dd25f703127b2e7a1b7550dc9dd877147f7.scope: Deactivated successfully.
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:56:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:56:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:56:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 12 16:56:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
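Because the monmap changed, cephadm regenerates each daemon's stub config and pushes it out; the audited mon commands above (config get public_network, config generate-minimal-conf) are the inputs to that rewrite. Manual equivalents, as a sketch:

    ceph config get mon public_network
    ceph config generate-minimal-conf   # the minimal ceph.conf cephadm distributes
    ceph orch reconfig mon              # ask cephadm to reconfigure the service by hand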
Oct 12 16:56:59 np0005481680 python3[87827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:56:59 np0005481680 podman[87851]: 2025-10-12 20:56:59.800907947 +0000 UTC m=+0.055463692 container create 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 16:56:59 np0005481680 systemd[1]: Started libpod-conmon-6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913.scope.
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/384371714' entity='client.admin' 
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:56:59 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:56:59 np0005481680 podman[87851]: 2025-10-12 20:56:59.772752736 +0000 UTC m=+0.027308531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:56:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:56:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f948fecaaf28fccde906aa8d9c4f7525f74058a9ea5ef56d2e7b6fd8dcdc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f948fecaaf28fccde906aa8d9c4f7525f74058a9ea5ef56d2e7b6fd8dcdc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853f948fecaaf28fccde906aa8d9c4f7525f74058a9ea5ef56d2e7b6fd8dcdc6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:56:59 np0005481680 podman[87851]: 2025-10-12 20:56:59.897627575 +0000 UTC m=+0.152183300 container init 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct 12 16:56:59 np0005481680 podman[87851]: 2025-10-12 20:56:59.907624181 +0000 UTC m=+0.162179886 container start 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:56:59 np0005481680 podman[87851]: 2025-10-12 20:56:59.910481644 +0000 UTC m=+0.165037349 container attach 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 16:56:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 12 16:56:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.188139857 +0000 UTC m=+0.064020022 container create 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 16:57:00 np0005481680 systemd[1]: Started libpod-conmon-2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e.scope.
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.161562626 +0000 UTC m=+0.037442841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.275937645 +0000 UTC m=+0.151817800 container init 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.286367752 +0000 UTC m=+0.162247887 container start 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.290102637 +0000 UTC m=+0.165982792 container attach 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:57:00 np0005481680 stoic_diffie[87947]: 167 167
Oct 12 16:57:00 np0005481680 systemd[1]: libpod-2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e.scope: Deactivated successfully.
Oct 12 16:57:00 np0005481680 conmon[87947]: conmon 2dce9f234472d54fcf6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e.scope/container/memory.events
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.294261604 +0000 UTC m=+0.170141739 container died 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3013140611' entity='client.admin' 
Oct 12 16:57:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c64bda042f8d56b144c49389f9ea3d37d90973004fe8e17e6fbbaac622334f7b-merged.mount: Deactivated successfully.
Oct 12 16:57:00 np0005481680 systemd[1]: libpod-6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913.scope: Deactivated successfully.
Oct 12 16:57:00 np0005481680 podman[87851]: 2025-10-12 20:57:00.336432814 +0000 UTC m=+0.590988519 container died 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 16:57:00 np0005481680 podman[87931]: 2025-10-12 20:57:00.359875815 +0000 UTC m=+0.235755950 container remove 2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e (image=quay.io/ceph/ceph:v19, name=stoic_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 16:57:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-853f948fecaaf28fccde906aa8d9c4f7525f74058a9ea5ef56d2e7b6fd8dcdc6-merged.mount: Deactivated successfully.
Oct 12 16:57:00 np0005481680 podman[87851]: 2025-10-12 20:57:00.402745153 +0000 UTC m=+0.657300858 container remove 6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913 (image=quay.io/ceph/ceph:v19, name=condescending_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 16:57:00 np0005481680 systemd[1]: libpod-conmon-2dce9f234472d54fcf6efb6bbaa4578a3ff73051eee8aa86e1e59480c25f467e.scope: Deactivated successfully.
Oct 12 16:57:00 np0005481680 systemd[1]: libpod-conmon-6d2da6df15f1fd72b1c47b889584fd6a41788018ec3ae9351a0a94a77b315913.scope: Deactivated successfully.
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fmjeht (monmap changed)...
Oct 12 16:57:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fmjeht (monmap changed)...
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:57:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:57:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v97: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:00 np0005481680 python3[88051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:00 np0005481680 podman[88054]: 2025-10-12 20:57:00.837438897 +0000 UTC m=+0.058578971 container create cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3013140611' entity='client.admin' 
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: Reconfiguring mgr.compute-0.fmjeht (monmap changed)...
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fmjeht", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:57:00 np0005481680 ceph-mon[73608]: Reconfiguring daemon mgr.compute-0.fmjeht on compute-0
Oct 12 16:57:00 np0005481680 systemd[1]: Started libpod-conmon-cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65.scope.
Oct 12 16:57:00 np0005481680 podman[88054]: 2025-10-12 20:57:00.810773124 +0000 UTC m=+0.031913258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c22f42b073a5725aeb9f81b26dcb9a3a07b69159b76d410ed0b5c4f13911572/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c22f42b073a5725aeb9f81b26dcb9a3a07b69159b76d410ed0b5c4f13911572/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c22f42b073a5725aeb9f81b26dcb9a3a07b69159b76d410ed0b5c4f13911572/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:00 np0005481680 podman[88054]: 2025-10-12 20:57:00.952853594 +0000 UTC m=+0.173993678 container init cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:00 np0005481680 podman[88054]: 2025-10-12 20:57:00.960037108 +0000 UTC m=+0.181177192 container start cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:00 np0005481680 podman[88054]: 2025-10-12 20:57:00.974094528 +0000 UTC m=+0.195234612 container attach cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 12 16:57:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 12 16:57:00 np0005481680 podman[88083]: 2025-10-12 20:57:00.993475025 +0000 UTC m=+0.088198330 container create 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:00.94448907 +0000 UTC m=+0.039212435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:01 np0005481680 systemd[1]: Started libpod-conmon-10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca.scope.
Oct 12 16:57:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:01.090718125 +0000 UTC m=+0.185441460 container init 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:01.101999994 +0000 UTC m=+0.196723309 container start 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:01 np0005481680 zealous_volhard[88102]: 167 167
Oct 12 16:57:01 np0005481680 systemd[1]: libpod-10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca.scope: Deactivated successfully.
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:01.108400268 +0000 UTC m=+0.203123623 container attach 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:01.109176568 +0000 UTC m=+0.203899933 container died 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 16:57:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-eacd125d713841529ca99a1be939d43612e2f4e6c0a4e38ed1ea6695025f9255-merged.mount: Deactivated successfully.
Oct 12 16:57:01 np0005481680 podman[88083]: 2025-10-12 20:57:01.184683892 +0000 UTC m=+0.279407207 container remove 10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca (image=quay.io/ceph/ceph:v19, name=zealous_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 16:57:01 np0005481680 systemd[1]: libpod-conmon-10f4e565b7d84444a78d6a754626ba04375fa4aaf3c3ff6de865314d0e4c4bca.scope: Deactivated successfully.
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:01 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct 12 16:57:01 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:01 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct 12 16:57:01 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' 
Oct 12 16:57:01 np0005481680 systemd[1]: libpod-cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65.scope: Deactivated successfully.
Oct 12 16:57:01 np0005481680 podman[88054]: 2025-10-12 20:57:01.357662673 +0000 UTC m=+0.578802747 container died cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8c22f42b073a5725aeb9f81b26dcb9a3a07b69159b76d410ed0b5c4f13911572-merged.mount: Deactivated successfully.
Oct 12 16:57:01 np0005481680 podman[88054]: 2025-10-12 20:57:01.410221529 +0000 UTC m=+0.631361613 container remove cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65 (image=quay.io/ceph/ceph:v19, name=objective_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 16:57:01 np0005481680 systemd[1]: libpod-conmon-cdc7e4eae1ed6f701abdf4699b96179543309e55c4ab6565d94a63b029afab65.scope: Deactivated successfully.
Oct 12 16:57:01 np0005481680 podman[88220]: 2025-10-12 20:57:01.855987408 +0000 UTC m=+0.065622262 container create a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 12 16:57:01 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.admin' 
Oct 12 16:57:01 np0005481680 systemd[1]: Started libpod-conmon-a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4.scope.
Oct 12 16:57:01 np0005481680 podman[88220]: 2025-10-12 20:57:01.828712029 +0000 UTC m=+0.038346903 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:01 np0005481680 podman[88220]: 2025-10-12 20:57:01.9564617 +0000 UTC m=+0.166096595 container init a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:01 np0005481680 podman[88220]: 2025-10-12 20:57:01.966452257 +0000 UTC m=+0.176087101 container start a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:01 np0005481680 condescending_mestorf[88260]: 167 167
Oct 12 16:57:01 np0005481680 podman[88220]: 2025-10-12 20:57:01.97166359 +0000 UTC m=+0.181298444 container attach a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 16:57:01 np0005481680 systemd[1]: libpod-a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4.scope: Deactivated successfully.
Oct 12 16:57:01 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 12 16:57:02 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 12 16:57:02 np0005481680 podman[88267]: 2025-10-12 20:57:02.04311661 +0000 UTC m=+0.046894741 container died a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 16:57:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-62bc8828e39c0f8eae6a97bc2fe161e49f8684832263a88bb8bac0794576145b-merged.mount: Deactivated successfully.
Oct 12 16:57:02 np0005481680 podman[88267]: 2025-10-12 20:57:02.086546053 +0000 UTC m=+0.090324184 container remove a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:02 np0005481680 systemd[1]: libpod-conmon-a11a22d335c9a6d8341139dfa376b65dca151fc6a9cae872d7947e8c470d3ee4.scope: Deactivated successfully.
Oct 12 16:57:02 np0005481680 python3[88264]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:02 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct 12 16:57:02 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:02 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Oct 12 16:57:02 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Oct 12 16:57:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.737518727 +0000 UTC m=+0.074318024 container create 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:02 np0005481680 python3[88382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.fmjeht/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.69936182 +0000 UTC m=+0.036161167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:02 np0005481680 systemd[1]: Started libpod-conmon-15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40.scope.
Oct 12 16:57:02 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.844494368 +0000 UTC m=+0.181293705 container init 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.855814128 +0000 UTC m=+0.192613425 container start 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:02 np0005481680 awesome_kalam[88409]: 167 167
Oct 12 16:57:02 np0005481680 podman[88401]: 2025-10-12 20:57:02.860576049 +0000 UTC m=+0.073618356 container create 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:02 np0005481680 systemd[1]: libpod-15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40.scope: Deactivated successfully.
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.869130938 +0000 UTC m=+0.205930275 container attach 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.870109224 +0000 UTC m=+0.206908521 container died 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: Reconfiguring osd.0 (monmap changed)...
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 12 16:57:02 np0005481680 ceph-mon[73608]: Reconfiguring daemon osd.0 on compute-0
Oct 12 16:57:02 np0005481680 systemd[1]: Started libpod-conmon-38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8.scope.
Oct 12 16:57:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6e24b72934f86944a6863d6b49d364a2c860b8cc48397e66ed4dd334a1ff6aca-merged.mount: Deactivated successfully.
Oct 12 16:57:02 np0005481680 podman[88401]: 2025-10-12 20:57:02.830630703 +0000 UTC m=+0.043673050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:02 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:02 np0005481680 podman[88387]: 2025-10-12 20:57:02.94568434 +0000 UTC m=+0.282483637 container remove 15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc719323c4e7a115fb3ac734f71907c5e0725fee065fcb47ee60ae0d76619194/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc719323c4e7a115fb3ac734f71907c5e0725fee065fcb47ee60ae0d76619194/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc719323c4e7a115fb3ac734f71907c5e0725fee065fcb47ee60ae0d76619194/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:02 np0005481680 systemd[1]: libpod-conmon-15c1ed4c992aef427e57bd248b780651e0d354ddf938e48177b6c4f333681e40.scope: Deactivated successfully.
Oct 12 16:57:02 np0005481680 podman[88401]: 2025-10-12 20:57:02.969784237 +0000 UTC m=+0.182826544 container init 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:02 np0005481680 podman[88401]: 2025-10-12 20:57:02.978704716 +0000 UTC m=+0.191746983 container start 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 16:57:02 np0005481680 podman[88401]: 2025-10-12 20:57:02.981984749 +0000 UTC m=+0.195027016 container attach 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 16:57:03 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 12 16:57:03 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.fmjeht/server_addr}] v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1803390664' entity='client.admin' 
Oct 12 16:57:03 np0005481680 systemd[1]: libpod-38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8.scope: Deactivated successfully.
Oct 12 16:57:03 np0005481680 podman[88401]: 2025-10-12 20:57:03.384099879 +0000 UTC m=+0.597142166 container died 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 16:57:03 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bc719323c4e7a115fb3ac734f71907c5e0725fee065fcb47ee60ae0d76619194-merged.mount: Deactivated successfully.
Oct 12 16:57:03 np0005481680 podman[88401]: 2025-10-12 20:57:03.42901593 +0000 UTC m=+0.642058187 container remove 38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8 (image=quay.io/ceph/ceph:v19, name=epic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:03 np0005481680 systemd[1]: libpod-conmon-38d6f974bd4c694d98590ede9a59c9ca5ea699401e75b28bd5038aa712587de8.scope: Deactivated successfully.
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Oct 12 16:57:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Oct 12 16:57:04 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Oct 12 16:57:04 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1803390664' entity='client.admin' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 12 16:57:04 np0005481680 python3[88504]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.orllvh/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:04 np0005481680 podman[88505]: 2025-10-12 20:57:04.470230081 +0000 UTC m=+0.069036809 container create 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:57:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:04 np0005481680 systemd[1]: Started libpod-conmon-03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4.scope.
Oct 12 16:57:04 np0005481680 podman[88505]: 2025-10-12 20:57:04.441973747 +0000 UTC m=+0.040780515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee25849bff58adc88fcf55f7b7259bb88049c8bfe9cb75f09bac9e338c4b023b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee25849bff58adc88fcf55f7b7259bb88049c8bfe9cb75f09bac9e338c4b023b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee25849bff58adc88fcf55f7b7259bb88049c8bfe9cb75f09bac9e338c4b023b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:04 np0005481680 podman[88505]: 2025-10-12 20:57:04.578668858 +0000 UTC m=+0.177475586 container init 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:04 np0005481680 podman[88505]: 2025-10-12 20:57:04.588467609 +0000 UTC m=+0.187274337 container start 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:57:04 np0005481680 podman[88505]: 2025-10-12 20:57:04.593256522 +0000 UTC m=+0.192063310 container attach 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:04 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct 12 16:57:04 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:04 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct 12 16:57:04 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct 12 16:57:05 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Oct 12 16:57:05 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.orllvh/server_addr}] v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1383668288' entity='client.admin' 
Oct 12 16:57:05 np0005481680 systemd[1]: libpod-03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4.scope: Deactivated successfully.
Oct 12 16:57:05 np0005481680 podman[88505]: 2025-10-12 20:57:05.073932074 +0000 UTC m=+0.672738792 container died 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ee25849bff58adc88fcf55f7b7259bb88049c8bfe9cb75f09bac9e338c4b023b-merged.mount: Deactivated successfully.
Oct 12 16:57:05 np0005481680 podman[88505]: 2025-10-12 20:57:05.126974013 +0000 UTC m=+0.725780741 container remove 03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 16:57:05 np0005481680 systemd[1]: libpod-conmon-03697e037dc65bbeadc250463d6447f21af21603ed4bf2933c71a3c54934f7f4.scope: Deactivated successfully.
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: Reconfiguring osd.1 (monmap changed)...
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: Reconfiguring daemon osd.1 on compute-1
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1383668288' entity='client.admin' 
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct 12 16:57:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct 12 16:57:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct 12 16:57:06 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 12 16:57:06 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 12 16:57:06 np0005481680 python3[88582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.iamnla/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:06 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.iamnla (monmap changed)...
Oct 12 16:57:06 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.iamnla (monmap changed)...
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:06 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:57:06 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.312276503 +0000 UTC m=+0.053514101 container create cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:06 np0005481680 systemd[1]: Started libpod-conmon-cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e.scope.
Oct 12 16:57:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e52ea4a153758eebc6d6e4de2849d94a6b9fb0a3fcf3df839322853f8779a1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e52ea4a153758eebc6d6e4de2849d94a6b9fb0a3fcf3df839322853f8779a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e52ea4a153758eebc6d6e4de2849d94a6b9fb0a3fcf3df839322853f8779a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.293573034 +0000 UTC m=+0.034810732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.403436909 +0000 UTC m=+0.144674517 container init cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.413446705 +0000 UTC m=+0.154684343 container start cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.417394247 +0000 UTC m=+0.158631995 container attach cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:57:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.iamnla/server_addr}] v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3606509013' entity='client.admin' 
Oct 12 16:57:06 np0005481680 systemd[1]: libpod-cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e.scope: Deactivated successfully.
Oct 12 16:57:06 np0005481680 podman[88583]: 2025-10-12 20:57:06.891471999 +0000 UTC m=+0.632709627 container died cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:07 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 12 16:57:07 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 12 16:57:07 np0005481680 systemd[1]: var-lib-containers-storage-overlay-86e52ea4a153758eebc6d6e4de2849d94a6b9fb0a3fcf3df839322853f8779a1-merged.mount: Deactivated successfully.
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:07 np0005481680 podman[88583]: 2025-10-12 20:57:07.095364692 +0000 UTC m=+0.836602330 container remove cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e (image=quay.io/ceph/ceph:v19, name=sweet_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:07 np0005481680 systemd[1]: libpod-conmon-cbdd077aa4a3e3f608e549ab4ca57030592b7099e8b832aba89177c7ee854a5e.scope: Deactivated successfully.
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: Reconfiguring mgr.compute-2.iamnla (monmap changed)...
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iamnla", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: Reconfiguring daemon mgr.compute-2.iamnla on compute-2
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3606509013' entity='client.admin' 
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:07 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:57:07 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:57:07 np0005481680 python3[88661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:07 np0005481680 podman[88662]: 2025-10-12 20:57:07.654831702 +0000 UTC m=+0.064703228 container create 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 12 16:57:07 np0005481680 systemd[1]: Started libpod-conmon-70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4.scope.
Oct 12 16:57:07 np0005481680 podman[88662]: 2025-10-12 20:57:07.627605865 +0000 UTC m=+0.037477441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce629299ae8efc7f30cdf41f3b2a8cbd34653424e92109dac425cc589e22172/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce629299ae8efc7f30cdf41f3b2a8cbd34653424e92109dac425cc589e22172/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce629299ae8efc7f30cdf41f3b2a8cbd34653424e92109dac425cc589e22172/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:07 np0005481680 podman[88662]: 2025-10-12 20:57:07.75861312 +0000 UTC m=+0.168484646 container init 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 16:57:07 np0005481680 podman[88662]: 2025-10-12 20:57:07.76988239 +0000 UTC m=+0.179753926 container start 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:07 np0005481680 podman[88662]: 2025-10-12 20:57:07.775046622 +0000 UTC m=+0.184918158 container attach 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 16:57:08 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 12 16:57:08 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3691716064' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3691716064' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3691716064' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 12 16:57:08 np0005481680 stoic_curran[88677]: module 'dashboard' is already disabled
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.fmjeht(active, since 2m), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:08 np0005481680 systemd[1]: libpod-70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4.scope: Deactivated successfully.
Oct 12 16:57:08 np0005481680 podman[88662]: 2025-10-12 20:57:08.413523576 +0000 UTC m=+0.823395112 container died 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7ce629299ae8efc7f30cdf41f3b2a8cbd34653424e92109dac425cc589e22172-merged.mount: Deactivated successfully.
Oct 12 16:57:08 np0005481680 podman[88662]: 2025-10-12 20:57:08.467194451 +0000 UTC m=+0.877065987 container remove 70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4 (image=quay.io/ceph/ceph:v19, name=stoic_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:08 np0005481680 systemd[1]: libpod-conmon-70c19584dc710a100fe2b20b851e722cf9458e0384f04f2f0d58bc12aada33c4.scope: Deactivated successfully.
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v101: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:08 np0005481680 python3[88741]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:08 np0005481680 podman[88742]: 2025-10-12 20:57:08.922394331 +0000 UTC m=+0.041093124 container create bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:08 np0005481680 systemd[1]: Started libpod-conmon-bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e.scope.
Oct 12 16:57:08 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85291f956397ad68ba386e085bafbafda3fc962729cbdad6fa3c838fe6ba2c04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85291f956397ad68ba386e085bafbafda3fc962729cbdad6fa3c838fe6ba2c04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85291f956397ad68ba386e085bafbafda3fc962729cbdad6fa3c838fe6ba2c04/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:09 np0005481680 podman[88742]: 2025-10-12 20:57:08.905381634 +0000 UTC m=+0.024080527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:09 np0005481680 podman[88742]: 2025-10-12 20:57:09.003255821 +0000 UTC m=+0.121954654 container init bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:09 np0005481680 podman[88742]: 2025-10-12 20:57:09.01216624 +0000 UTC m=+0.130865063 container start bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:09 np0005481680 podman[88742]: 2025-10-12 20:57:09.015802343 +0000 UTC m=+0.134501186 container attach bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:09 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 12 16:57:09 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3691716064' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: from='mgr.14122 192.168.122.100:0/194366492' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2551266411' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 12 16:57:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:09 np0005481680 podman[88871]: 2025-10-12 20:57:09.94953929 +0000 UTC m=+0.070321842 container create 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:09 np0005481680 systemd[1]: Started libpod-conmon-16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de.scope.
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:09.920408254 +0000 UTC m=+0.041190886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:10.044492643 +0000 UTC m=+0.165275225 container init 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:10.051185634 +0000 UTC m=+0.171968186 container start 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:10 np0005481680 happy_hypatia[88887]: 167 167
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:10.056014967 +0000 UTC m=+0.176797609 container attach 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:10.057850275 +0000 UTC m=+0.178632857 container died 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:10 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 12 16:57:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4461a9d29c27100cd9d4dfbf26ff7cf1a8f15c2c790257d5c8050b112fb8b200-merged.mount: Deactivated successfully.
Oct 12 16:57:10 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 12 16:57:10 np0005481680 podman[88871]: 2025-10-12 20:57:10.107183609 +0000 UTC m=+0.227966181 container remove 16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hypatia, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-conmon-16f1100235a5e6da7011dc8f8148aadc8d9b4d7f3a3e15049f789fce02afe0de.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 podman[88911]: 2025-10-12 20:57:10.330191801 +0000 UTC m=+0.067064929 container create ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 16:57:10 np0005481680 systemd[1]: Started libpod-conmon-ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9.scope.
Oct 12 16:57:10 np0005481680 podman[88911]: 2025-10-12 20:57:10.303603629 +0000 UTC m=+0.040476827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:10 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2551266411' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 12 16:57:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:10 np0005481680 podman[88911]: 2025-10-12 20:57:10.43091087 +0000 UTC m=+0.167784038 container init ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2551266411' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  1: '-n'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  2: 'mgr.compute-0.fmjeht'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  3: '-f'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  4: '--setuser'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  5: 'ceph'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  6: '--setgroup'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  7: 'ceph'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  8: '--default-log-to-file=false'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  9: '--default-log-to-journald=true'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr respawn  exe_path /proc/self/exe
Oct 12 16:57:10 np0005481680 podman[88911]: 2025-10-12 20:57:10.448894711 +0000 UTC m=+0.185767849 container start ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 16:57:10 np0005481680 podman[88911]: 2025-10-12 20:57:10.454521015 +0000 UTC m=+0.191394153 container attach ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:10 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.fmjeht(active, since 2m), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 podman[88742]: 2025-10-12 20:57:10.485340195 +0000 UTC m=+1.604039028 container died bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 16:57:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-85291f956397ad68ba386e085bafbafda3fc962729cbdad6fa3c838fe6ba2c04-merged.mount: Deactivated successfully.
Oct 12 16:57:10 np0005481680 podman[88742]: 2025-10-12 20:57:10.537214143 +0000 UTC m=+1.655912966 container remove bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e (image=quay.io/ceph/ceph:v19, name=frosty_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-conmon-bb9d3f4a92ab8c3d973151c841c1f5d6649a04baaabdddce5e4782e4fd90830e.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-24.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-27.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 27 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 24 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 34 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 33 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd[1]: session-31.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-26.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-29.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-25.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 31 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd[1]: session-22.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-32.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 26 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setuser ceph since I am not root
Oct 12 16:57:10 np0005481680 systemd[1]: session-30.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setgroup ceph since I am not root
Oct 12 16:57:10 np0005481680 systemd[1]: session-33.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 29 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 22 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 32 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 25 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 30 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 24.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 27.
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:57:10 np0005481680 systemd[1]: session-28.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Session 28 logged out. Waiting for processes to exit.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 33.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 31.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 26.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 29.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 25.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 22.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 32.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 30.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 28.
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:57:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:10.748+0000 7fc664ca4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
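The repeated "missing NOTIFY_TYPES member" warnings above and below are emitted once per mgr module whose class does not declare which cluster notifications it consumes. A minimal sketch of a module that does declare the member (the module name "Example" and the subscribed types are illustrative assumptions; this only loads inside ceph-mgr, which provides the mgr_module package):

# Minimal sketch of a ceph-mgr Python module declaring NOTIFY_TYPES.
# Only loadable inside ceph-mgr; module name and types are illustrative.
from mgr_module import MgrModule, NotifyType

class Example(MgrModule):
    # Declaring the notifications this module consumes is what the
    # loader checks for; without it, the warning above is logged.
    NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.mon_map]

    def notify(self, notify_type, notify_id):
        # Invoked by the mgr for each subscribed event.
        self.log.info("notification: %s", notify_type)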
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:57:10 np0005481680 sharp_sinoussi[88928]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:57:10 np0005481680 sharp_sinoussi[88928]: --> All data devices are unavailable
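The two sharp_sinoussi lines above are a cephadm-launched ceph-volume run concluding that its single LVM data device is not usable. A sketch of how to see the same verdict per device through the orchestrator (assumptions: admin CLI access on the host; the JSON field names addr, devices, path, available and rejected_reasons are recalled from the `ceph orch device ls` output and should be treated as illustrative):

import json
import subprocess

out = subprocess.run(
    ["ceph", "orch", "device", "ls", "--format", "json"],
    check=True, capture_output=True, text=True).stdout
for host in json.loads(out):
    for dev in host.get("devices", []):
        verdict = ("available" if dev.get("available")
                   else "rejected: " + ", ".join(dev.get("rejected_reasons", [])))
        print(host.get("addr"), dev.get("path"), verdict)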
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:10.825+0000 7fc664ca4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:10 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:57:10 np0005481680 podman[88975]: 2025-10-12 20:57:10.84196463 +0000 UTC m=+0.027886476 container died ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cff727e5345435152b9b59fcc4c8ca594243ee7cf2baf74a89e26f1c62c5c911-merged.mount: Deactivated successfully.
Oct 12 16:57:10 np0005481680 podman[88975]: 2025-10-12 20:57:10.900543339 +0000 UTC m=+0.086465135 container remove ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_sinoussi, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:57:10 np0005481680 systemd[1]: libpod-conmon-ad3c4233f9fef3dc4ade3733657b7d981ad4b818d229308756950316fa9bf6f9.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-34.scope: Deactivated successfully.
Oct 12 16:57:10 np0005481680 systemd[1]: session-34.scope: Consumed 31.867s CPU time.
Oct 12 16:57:10 np0005481680 systemd-logind[783]: Removed session 34.
Oct 12 16:57:11 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 12 16:57:11 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 12 16:57:11 np0005481680 python3[89012]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
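A hedged reconstruction of the ansible task above as a plain Python call; the image, volumes, fsid, and dashboard subcommand are copied verbatim from the log line, everything else is a best-effort assumption:

import subprocess

cmd = [
    "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
    "--volume", "/home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
    "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "dashboard", "set-grafana-api-username", "admin",
]
# check=True raises if the containerized ceph command exits non-zero.
subprocess.run(cmd, check=True)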
Oct 12 16:57:11 np0005481680 podman[89013]: 2025-10-12 20:57:11.14104226 +0000 UTC m=+0.056318063 container create a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:11 np0005481680 systemd[1]: Started libpod-conmon-a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea.scope.
Oct 12 16:57:11 np0005481680 podman[89013]: 2025-10-12 20:57:11.122607218 +0000 UTC m=+0.037883051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:11 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:11 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1b7b10884086e502feca8d5cd3a859f1f3c2a6ff292b1f190541da307bb587/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:11 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1b7b10884086e502feca8d5cd3a859f1f3c2a6ff292b1f190541da307bb587/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:11 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1b7b10884086e502feca8d5cd3a859f1f3c2a6ff292b1f190541da307bb587/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:11 np0005481680 podman[89013]: 2025-10-12 20:57:11.24060783 +0000 UTC m=+0.155883633 container init a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:57:11 np0005481680 podman[89013]: 2025-10-12 20:57:11.246539333 +0000 UTC m=+0.161815146 container start a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:11 np0005481680 podman[89013]: 2025-10-12 20:57:11.249913489 +0000 UTC m=+0.165189312 container attach a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:57:11 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/2551266411' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 12 16:57:11 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:57:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:11.584+0000 7fc664ca4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:11 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:11 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:57:12 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 12 16:57:12 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:12.166+0000 7fc664ca4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but sub-interpreter support is not on the NumPy roadmap, and full support may require significant effort to achieve.

Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:12.317+0000 7fc664ca4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:12.382+0000 7fc664ca4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:57:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:12.508+0000 7fc664ca4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:57:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:57:13 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 12 16:57:13 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.466+0000 7fc664ca4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.670+0000 7fc664ca4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.742+0000 7fc664ca4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.804+0000 7fc664ca4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.877+0000 7fc664ca4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:57:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:13.942+0000 7fc664ca4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:57:14 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct 12 16:57:14 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct 12 16:57:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:14.275+0000 7fc664ca4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:57:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:14.375+0000 7fc664ca4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:57:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:14.795+0000 7fc664ca4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:57:15 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 12 16:57:15 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.325+0000 7fc664ca4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.395+0000 7fc664ca4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.469+0000 7fc664ca4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.622+0000 7fc664ca4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.696+0000 7fc664ca4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:57:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:15.865+0000 7fc664ca4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:57:16 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 12 16:57:16 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 12 16:57:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:16.085+0000 7fc664ca4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla restarted
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla started
Oct 12 16:57:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:16.361+0000 7fc664ca4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh restarted
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh started
Oct 12 16:57:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:16.425+0000 7fc664ca4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x56043f377860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.fmjeht(active, starting, since 0.0316137s), standbys: compute-1.orllvh, compute-2.iamnla
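The mgrmap lines above track the restarted daemon through starting -> active. A minimal check (assumption: the admin keyring and `ceph` CLI are available on this host) that reports the same state the mon is logging:

import json
import subprocess

# `ceph mgr stat` summarizes the current mgrmap: active daemon name
# and whether an active mgr is available at all.
stat = json.loads(subprocess.run(
    ["ceph", "mgr", "stat", "--format", "json"],
    check=True, capture_output=True, text=True).stdout)
print("active mgr:", stat.get("active_name"))
print("available:", stat.get("available"))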
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e1 all = 1
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmjeht is now available
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:57:16
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: cephadm
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: dashboard
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO sso] Loading SSO DB version=1
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO root] Configured CherryPy, starting engine...
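The dashboard above comes up without TLS (ssl=no) on port 8443. A hedged sketch of the documented way to switch it to HTTPS with a self-signed certificate (assumptions: admin CLI access; the disable/enable pair restarts the module so the listener picks up the new certificate):

import subprocess

for cmd in (
    ["ceph", "config", "set", "mgr", "mgr/dashboard/ssl", "true"],
    ["ceph", "dashboard", "create-self-signed-cert"],
    # restart the module so the listener is re-created with TLS
    ["ceph", "mgr", "module", "disable", "dashboard"],
    ["ceph", "mgr", "module", "enable", "dashboard"],
):
    subprocess.run(cmd, check=True)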
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fc5e1ffde50>, <progress.module.GhostEvent object at 0x7fc5e1ffde80>, <progress.module.GhostEvent object at 0x7fc5e1ffdeb0>, <progress.module.GhostEvent object at 0x7fc5e1ffdee0>, <progress.module.GhostEvent object at 0x7fc5e1ffdf10>, <progress.module.GhostEvent object at 0x7fc5e1ffdf40>, <progress.module.GhostEvent object at 0x7fc5e1ffdf70>, <progress.module.GhostEvent object at 0x7fc5e1ffdfa0>, <progress.module.GhostEvent object at 0x7fc5e1ffdfd0>, <progress.module.GhostEvent object at 0x7fc5e1fff040>] historic events
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
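The restful module deliberately stays down when no certificate is configured, as the warning above shows. The documented fix is to generate one; a one-call sketch (assumption: admin keyring available on this host):

import subprocess

# Generates and stores a self-signed certificate for the restful module.
subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)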
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"} v 0)
Oct 12 16:57:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 12 16:57:16 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct 12 16:57:16 np0005481680 systemd-logind[783]: New session 35 of user ceph-admin.
Oct 12 16:57:16 np0005481680 systemd[1]: Started Session 35 of User ceph-admin.
Oct 12 16:57:17 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.module] Engine started.
Oct 12 16:57:17 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 12 16:57:17 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: Manager daemon compute-0.fmjeht is now available
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.fmjeht(active, since 1.1216s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct 12 16:57:17 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:17 np0005481680 kind_clarke[89034]: Option GRAFANA_API_USERNAME updated
Oct 12 16:57:17 np0005481680 systemd[1]: libpod-a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea.scope: Deactivated successfully.
Oct 12 16:57:17 np0005481680 podman[89286]: 2025-10-12 20:57:17.653177616 +0000 UTC m=+0.039030481 container died a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 16:57:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-aa1b7b10884086e502feca8d5cd3a859f1f3c2a6ff292b1f190541da307bb587-merged.mount: Deactivated successfully.
Oct 12 16:57:17 np0005481680 podman[89286]: 2025-10-12 20:57:17.700682493 +0000 UTC m=+0.086535328 container remove a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea (image=quay.io/ceph/ceph:v19, name=kind_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:17 np0005481680 systemd[1]: libpod-conmon-a5d6ed4fc9ed8c2292561ca5642f5612a436378d63169f461bc9cb467fbd38ea.scope: Deactivated successfully.
Oct 12 16:57:17 np0005481680 podman[89328]: 2025-10-12 20:57:17.850704315 +0000 UTC m=+0.060115581 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:17 np0005481680 podman[89328]: 2025-10-12 20:57:17.943689558 +0000 UTC m=+0.153100844 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:18 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 12 16:57:18 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 12 16:57:18 np0005481680 python3[89374]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
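The _raw_params in the ansible entry above compress the whole invocation onto one line; reconstructed as a shell sketch (all values exactly as logged, with the Grafana password arriving on stdin from the playbook's /home/grafana_password.yml via -i -):

    podman run --rm --net=host --ipc=host --interactive \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      dashboard set-grafana-api-password -i -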
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.183860469 +0000 UTC m=+0.053846220 container create 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:18 np0005481680 systemd[1]: Started libpod-conmon-742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6.scope.
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:18] ENGINE Bus STARTING
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:18] ENGINE Bus STARTING
Oct 12 16:57:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcd3b3dfe29b2524587a582fa978d5ef82d690f8eb921cfe251445d84b6919e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcd3b3dfe29b2524587a582fa978d5ef82d690f8eb921cfe251445d84b6919e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcd3b3dfe29b2524587a582fa978d5ef82d690f8eb921cfe251445d84b6919e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.167419399 +0000 UTC m=+0.037405150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.263261473 +0000 UTC m=+0.133247254 container init 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.274782978 +0000 UTC m=+0.144768729 container start 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.278224027 +0000 UTC m=+0.148209778 container attach 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:18] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:18] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:18] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:18] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:18] ENGINE Bus STARTED
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:18] ENGINE Bus STARTED
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:18] ENGINE Client ('192.168.122.100', 48920) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:18] ENGINE Client ('192.168.122.100', 48920) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 16:57:18 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct 12 16:57:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:18 np0005481680 xenodochial_hodgkin[89436]: Option GRAFANA_API_PASSWORD updated
Oct 12 16:57:18 np0005481680 systemd[1]: libpod-742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6.scope: Deactivated successfully.
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.653784536 +0000 UTC m=+0.523770287 container died 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6fcd3b3dfe29b2524587a582fa978d5ef82d690f8eb921cfe251445d84b6919e-merged.mount: Deactivated successfully.
Oct 12 16:57:18 np0005481680 podman[89406]: 2025-10-12 20:57:18.695215158 +0000 UTC m=+0.565200909 container remove 742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6 (image=quay.io/ceph/ceph:v19, name=xenodochial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 16:57:18 np0005481680 systemd[1]: libpod-conmon-742b274f2d6638915ae07198282fdd7b88da4f2b9ec75760787a081c8ea15ab6.scope: Deactivated successfully.
Oct 12 16:57:19 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 12 16:57:19 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 12 16:57:19 np0005481680 python3[89615]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.175451299 +0000 UTC m=+0.048623657 container create 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:19 np0005481680 systemd[1]: Started libpod-conmon-3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5.scope.
Oct 12 16:57:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d0a14b4af65b8d8f56c3debc5434eb6e8beb9cfe3d94492d41d1aebc09d38e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d0a14b4af65b8d8f56c3debc5434eb6e8beb9cfe3d94492d41d1aebc09d38e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d0a14b4af65b8d8f56c3debc5434eb6e8beb9cfe3d94492d41d1aebc09d38e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.241003697 +0000 UTC m=+0.114176095 container init 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.149630217 +0000 UTC m=+0.022802625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.246719854 +0000 UTC m=+0.119892212 container start 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.251895537 +0000 UTC m=+0.125067975 container attach 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.fmjeht(active, since 3s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
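The rejected value is easy to sanity-check: 134237798 bytes is about 128.0 MiB (134237798 / 1048576 ≈ 128.02), while the enforced minimum of 939524096 bytes is exactly 896 MiB. cephadm's memory autotuner is apparently deriving a per-OSD target from the little RAM left on these small VMs and landing far below osd_memory_target's floor, so the config set is refused and the same warning repeats for each host below. One hedged way to quiet this on memory-constrained lab nodes (osd_memory_target_autotune is a real per-daemon option, but whether disabling it is appropriate here is an assumption):

    # Sketch: stop cephadm from autotuning the OSD memory target.
    ceph config set osd osd_memory_target_autotune false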
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:18] ENGINE Bus STARTING
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:18] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:18] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:18] ENGINE Bus STARTED
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:18] ENGINE Client ('192.168.122.100', 48920) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 zealous_joliot[89645]: Option ALERTMANAGER_API_HOST updated
Oct 12 16:57:19 np0005481680 systemd[1]: libpod-3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5.scope: Deactivated successfully.
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.659484397 +0000 UTC m=+0.532656755 container died 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 16:57:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c8d0a14b4af65b8d8f56c3debc5434eb6e8beb9cfe3d94492d41d1aebc09d38e-merged.mount: Deactivated successfully.
Oct 12 16:57:19 np0005481680 podman[89630]: 2025-10-12 20:57:19.700128168 +0000 UTC m=+0.573300516 container remove 3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5 (image=quay.io/ceph/ceph:v19, name=zealous_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:57:19 np0005481680 systemd[1]: libpod-conmon-3101f802e0be646843605669a8a361527708cbca4fb3468f8949008aeb8979a5.scope: Deactivated successfully.
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:57:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:57:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
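The conf being pushed to each host is the output of the config generate-minimal-conf command dispatched just above (paired with auth get client.admin for the keyring). The same minimal conf can be inspected by hand; a sketch, reusing the admin keyring path seen elsewhere in this log:

    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config generate-minimal-conf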
Oct 12 16:57:20 np0005481680 python3[89797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:20 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 12 16:57:20 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.131735073 +0000 UTC m=+0.067283335 container create 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:20 np0005481680 systemd[1]: Started libpod-conmon-614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9.scope.
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.102828223 +0000 UTC m=+0.038376525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389cb364855378f6a831ed5cec09dad59124c8f39f174cef1c03b6c2f539e128/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389cb364855378f6a831ed5cec09dad59124c8f39f174cef1c03b6c2f539e128/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389cb364855378f6a831ed5cec09dad59124c8f39f174cef1c03b6c2f539e128/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.217244174 +0000 UTC m=+0.152792476 container init 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.22922269 +0000 UTC m=+0.164770952 container start 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.234750522 +0000 UTC m=+0.170298774 container attach 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14403 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:20 np0005481680 quizzical_joliot[89889]: Option PROMETHEUS_API_HOST updated
Oct 12 16:57:20 np0005481680 systemd[1]: libpod-614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9.scope: Deactivated successfully.
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.641267314 +0000 UTC m=+0.576815576 container died 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-389cb364855378f6a831ed5cec09dad59124c8f39f174cef1c03b6c2f539e128-merged.mount: Deactivated successfully.
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:20 np0005481680 podman[89838]: 2025-10-12 20:57:20.707120401 +0000 UTC m=+0.642668623 container remove 614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9 (image=quay.io/ceph/ceph:v19, name=quizzical_joliot, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:20 np0005481680 systemd[1]: libpod-conmon-614064850c276ee00a05118005188489537520f90894ad19b666ac80c2fddef9.scope: Deactivated successfully.
Oct 12 16:57:20 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.fmjeht(active, since 4s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:21 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 12 16:57:21 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 12 16:57:21 np0005481680 python3[90174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.230381585 +0000 UTC m=+0.078447021 container create bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 systemd[1]: Started libpod-conmon-bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f.scope.
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.203958618 +0000 UTC m=+0.052024124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6d6fc8b2d97bb897963473089f41ad3ccceec906cea1a9830e8f581cb1d8f9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6d6fc8b2d97bb897963473089f41ad3ccceec906cea1a9830e8f581cb1d8f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6d6fc8b2d97bb897963473089f41ad3ccceec906cea1a9830e8f581cb1d8f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.325285706 +0000 UTC m=+0.173351122 container init bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.336197906 +0000 UTC m=+0.184263312 container start bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.33992795 +0000 UTC m=+0.187993426 container attach bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-0 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:21 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 12 16:57:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:21 np0005481680 friendly_galileo[90289]: Option GRAFANA_API_URL updated
Oct 12 16:57:21 np0005481680 systemd[1]: libpod-bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f.scope: Deactivated successfully.
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.73692781 +0000 UTC m=+0.584993216 container died bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:21 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8d6d6fc8b2d97bb897963473089f41ad3ccceec906cea1a9830e8f581cb1d8f9-merged.mount: Deactivated successfully.
Oct 12 16:57:21 np0005481680 podman[90225]: 2025-10-12 20:57:21.789208469 +0000 UTC m=+0.637273875 container remove bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f (image=quay.io/ceph/ceph:v19, name=friendly_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:21 np0005481680 systemd[1]: libpod-conmon-bb8b0d191a108ab7b89c7fe402985c1be9be8a04149627da34016206a435b17f.scope: Deactivated successfully.
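Taken together, the last three transient containers wired the dashboard to its monitoring stack: Alertmanager at http://192.168.122.100:9093, Prometheus at http://192.168.122.100:9092, and Grafana at http://192.168.122.100:3100. Run directly instead of through the playbook's podman wrapper, the equivalent sketch would be (URLs exactly as logged; running them outside the container is the assumption):

    ceph dashboard set-alertmanager-api-host http://192.168.122.100:9093
    ceph dashboard set-prometheus-api-host http://192.168.122.100:9092
    ceph dashboard set-grafana-api-url http://192.168.122.100:3100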
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 python3[90572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
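Unescaped, the ansible _raw_params above amounts to the following invocation (image, fsid, and volume paths copied from the log line; line breaks added for readability only):

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module disable dashboard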
Oct 12 16:57:22 np0005481680 podman[90602]: 2025-10-12 20:57:22.217011427 +0000 UTC m=+0.061670500 container create 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:22 np0005481680 systemd[1]: Started libpod-conmon-38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649.scope.
Oct 12 16:57:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:22 np0005481680 podman[90602]: 2025-10-12 20:57:22.191150904 +0000 UTC m=+0.035810057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5458dd71fbff0f12086596f85c2d025021f289b1d24a1257cebf857848a8723/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5458dd71fbff0f12086596f85c2d025021f289b1d24a1257cebf857848a8723/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5458dd71fbff0f12086596f85c2d025021f289b1d24a1257cebf857848a8723/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:22 np0005481680 podman[90602]: 2025-10-12 20:57:22.308521391 +0000 UTC m=+0.153180514 container init 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:22 np0005481680 podman[90602]: 2025-10-12 20:57:22.320378185 +0000 UTC m=+0.165037268 container start 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:22 np0005481680 podman[90602]: 2025-10-12 20:57:22.324342696 +0000 UTC m=+0.169001859 container attach 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1616969784' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:57:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 3519febe-882b-40ae-b75e-9b7997d6b00d (Updating node-exporter deployment (+3 -> 3))
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct 12 16:57:22 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1616969784' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: from='mgr.14358 192.168.122.100:0/3451409690' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:23 np0005481680 systemd[1]: Reloading.
Oct 12 16:57:23 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:57:23 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1616969784' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
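Each mon_command passes through a dispatch audit entry and then a finished one, as above; once "mgr module disable dashboard" reports finished, the change should be reflected in the enabled-module set. A quick check, assuming the same client.admin credentials:

    ceph mgr module ls | head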
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  1: '-n'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  2: 'mgr.compute-0.fmjeht'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  3: '-f'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  4: '--setuser'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  5: 'ceph'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  6: '--setgroup'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  7: 'ceph'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  8: '--default-log-to-file=false'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  9: '--default-log-to-journald=true'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr respawn  exe_path /proc/self/exe
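The respawn block above is the mgr replacing its own process image: after the set of enabled modules changes (here, dashboard being disabled), it re-executes /proc/self/exe with the saved argv, which is why the pid and journal unit stay the same while the daemon restarts. A toy sh sketch of the same re-exec pattern, not the mgr's actual implementation:

    #!/bin/sh
    # Re-exec this process once via /proc/self/exe, preserving arguments.
    # RESPAWNED guards against re-executing forever.
    echo "pid $$ running with args: $*"
    if [ -z "$RESPAWNED" ]; then
        export RESPAWNED=1
        exec /proc/self/exe "$0" "$@"
    fi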
Oct 12 16:57:23 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.fmjeht(active, since 7s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:23 np0005481680 podman[90990]: 2025-10-12 20:57:23.832172008 +0000 UTC m=+0.043347951 container died 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct 12 16:57:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setuser ceph since I am not root
Oct 12 16:57:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setgroup ceph since I am not root
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:57:23 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:57:23 np0005481680 systemd[1]: libpod-38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649.scope: Deactivated successfully.
Oct 12 16:57:23 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a5458dd71fbff0f12086596f85c2d025021f289b1d24a1257cebf857848a8723-merged.mount: Deactivated successfully.
Oct 12 16:57:23 np0005481680 systemd-logind[783]: Session 35 logged out. Waiting for processes to exit.
Oct 12 16:57:23 np0005481680 podman[90990]: 2025-10-12 20:57:23.941643183 +0000 UTC m=+0.152819106 container remove 38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649 (image=quay.io/ceph/ceph:v19, name=infallible_engelbart, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 16:57:23 np0005481680 systemd[1]: libpod-conmon-38f5093edf7a9ac0c74a5e43db1276008357ffc3d51a47083de618f0f540c649.scope: Deactivated successfully.
Oct 12 16:57:23 np0005481680 systemd[1]: Reloading.
Oct 12 16:57:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:24.000+0000 7fbbdc20d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:57:24 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:57:24 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:57:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:24.081+0000 7fbbdc20d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:57:24 np0005481680 systemd[1]: Starting Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:57:24 np0005481680 python3[91094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:24 np0005481680 podman[91129]: 2025-10-12 20:57:24.497376147 +0000 UTC m=+0.071832190 container create 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:57:24 np0005481680 bash[91158]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 12 16:57:24 np0005481680 systemd[1]: Started libpod-conmon-37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143.scope.
Oct 12 16:57:24 np0005481680 podman[91129]: 2025-10-12 20:57:24.470241053 +0000 UTC m=+0.044697176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:24 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b82fa27ba9ecc741fe6e6fb7daef7022ea4fd5c2d04f4b70210f6d019566bb8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b82fa27ba9ecc741fe6e6fb7daef7022ea4fd5c2d04f4b70210f6d019566bb8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b82fa27ba9ecc741fe6e6fb7daef7022ea4fd5c2d04f4b70210f6d019566bb8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:24 np0005481680 podman[91129]: 2025-10-12 20:57:24.589792685 +0000 UTC m=+0.164248738 container init 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:57:24 np0005481680 podman[91129]: 2025-10-12 20:57:24.600904769 +0000 UTC m=+0.175360792 container start 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:24 np0005481680 podman[91129]: 2025-10-12 20:57:24.603642249 +0000 UTC m=+0.178098302 container attach 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:57:24 np0005481680 ceph-mon[73608]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 12 16:57:24 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1616969784' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 12 16:57:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
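The _set_new_cache_sizes line is the mon's cache autotuner redistributing its allocations (values in bytes) against the monitor's memory target. The target in effect can be read back from the config database, assuming it has not been overridden elsewhere:

    ceph config get mon mon_memory_target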
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:57:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:24.847+0000 7fbbdc20d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:24 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:57:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct 12 16:57:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/485783710' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 12 16:57:25 np0005481680 bash[91158]: Getting image source signatures
Oct 12 16:57:25 np0005481680 bash[91158]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 12 16:57:25 np0005481680 bash[91158]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 12 16:57:25 np0005481680 bash[91158]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:25.488+0000 7fbbdc20d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
Oct 12 16:57:25 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/485783710' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:25.642+0000 7fbbdc20d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:57:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/485783710' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 12 16:57:25 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.fmjeht(active, since 9s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:25 np0005481680 systemd[1]: libpod-37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143.scope: Deactivated successfully.
Oct 12 16:57:25 np0005481680 podman[91129]: 2025-10-12 20:57:25.692244144 +0000 UTC m=+1.266700177 container died 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:25.732+0000 7fbbdc20d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:57:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2b82fa27ba9ecc741fe6e6fb7daef7022ea4fd5c2d04f4b70210f6d019566bb8-merged.mount: Deactivated successfully.
Oct 12 16:57:25 np0005481680 bash[91158]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 12 16:57:25 np0005481680 bash[91158]: Writing manifest to image destination
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:25.869+0000 7fbbdc20d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:25 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:57:25 np0005481680 podman[91129]: 2025-10-12 20:57:25.878744791 +0000 UTC m=+1.453200824 container remove 37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143 (image=quay.io/ceph/ceph:v19, name=brave_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 16:57:25 np0005481680 systemd[1]: libpod-conmon-37abe1ca2bc2c12c68148d337e4965e46da6e3bb06dc02e79ccdb319637a7143.scope: Deactivated successfully.
Oct 12 16:57:25 np0005481680 podman[91158]: 2025-10-12 20:57:25.898541238 +0000 UTC m=+1.408309825 container create 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:57:25 np0005481680 podman[91158]: 2025-10-12 20:57:25.880741262 +0000 UTC m=+1.390509849 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 12 16:57:25 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9bff4cc76b0d227cf5515b2be51080bc17fb638634b1e2a5d1c12e2630d6d0/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:25 np0005481680 podman[91158]: 2025-10-12 20:57:25.951577636 +0000 UTC m=+1.461346213 container init 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:57:25 np0005481680 podman[91158]: 2025-10-12 20:57:25.956822981 +0000 UTC m=+1.466591568 container start 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:57:25 np0005481680 bash[91158]: 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.968Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.968Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.968Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.968Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.969Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.969Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 12 16:57:25 np0005481680 systemd[1]: Started Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=arp
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=bcache
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=bonding
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=cpu
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=dmi
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=edac
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=entropy
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=filefd
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=netclass
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=netdev
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=netstat
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=nfs
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=nvme
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=os
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=pressure
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=rapl
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=selinux
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=softnet
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=stat
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=textfile
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=time
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=uname
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=xfs
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.970Z caller=node_exporter.go:117 level=info collector=zfs
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.971Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 12 16:57:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[91274]: ts=2025-10-12T20:57:25.971Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
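With node-exporter now listening on [::]:9100 and TLS disabled, the metrics endpoint is plain HTTP and can be probed from the host; a minimal check (port taken from the log, node_uname_info chosen because the uname collector is enabled above):

    curl -s http://localhost:9100/metrics | grep '^node_uname_info'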
Oct 12 16:57:26 np0005481680 systemd[1]: session-35.scope: Deactivated successfully.
Oct 12 16:57:26 np0005481680 systemd[1]: session-35.scope: Consumed 6.448s CPU time.
Oct 12 16:57:26 np0005481680 systemd-logind[783]: Removed session 35.
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:57:26 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/485783710' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 12 16:57:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:26.762+0000 7fbbdc20d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:57:26 np0005481680 python3[91358]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:57:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:26.959+0000 7fbbdc20d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:26 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.027+0000 7fbbdc20d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.088+0000 7fbbdc20d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.157+0000 7fbbdc20d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:57:27 np0005481680 python3[91429]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302646.448349-33912-40850889559390/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.221+0000 7fbbdc20d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.538+0000 7fbbdc20d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:57:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:27.627+0000 7fbbdc20d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:57:27 np0005481680 python3[91479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
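The trailing #012 in the logged command is rsyslog's octal escape for an embedded newline, not part of the command itself. Stripped of the podman wrapper, the call reduces to the CephFS volume creation below (volume name and placement string copied from the log):

    ceph fs volume create cephfs --placement="compute-0 compute-1 compute-2"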
Oct 12 16:57:27 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:57:27 np0005481680 podman[91480]: 2025-10-12 20:57:27.880327671 +0000 UTC m=+0.061997250 container create 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 16:57:27 np0005481680 systemd[1]: Started libpod-conmon-1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a.scope.
Oct 12 16:57:27 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:27 np0005481680 podman[91480]: 2025-10-12 20:57:27.855424083 +0000 UTC m=+0.037093692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062eab89c56d6c3d5d440ffa11747bd1092dcc55765eb0f02066eff784c52841/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062eab89c56d6c3d5d440ffa11747bd1092dcc55765eb0f02066eff784c52841/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062eab89c56d6c3d5d440ffa11747bd1092dcc55765eb0f02066eff784c52841/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:27 np0005481680 podman[91480]: 2025-10-12 20:57:27.96890923 +0000 UTC m=+0.150578859 container init 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:57:27 np0005481680 podman[91480]: 2025-10-12 20:57:27.979633565 +0000 UTC m=+0.161303174 container start 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:57:27 np0005481680 podman[91480]: 2025-10-12 20:57:27.983278308 +0000 UTC m=+0.164947927 container attach 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.071+0000 7fbbdc20d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.632+0000 7fbbdc20d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.700+0000 7fbbdc20d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.774+0000 7fbbdc20d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.911+0000 7fbbdc20d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:57:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:28.977+0000 7fbbdc20d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:28 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.122+0000 7fbbdc20d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.326+0000 7fbbdc20d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.573+0000 7fbbdc20d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.640+0000 7fbbdc20d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
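
Each "missing NOTIFY_TYPES member" line above is the mgr probing a just-loaded Python module for a NOTIFY_TYPES attribute; when the attribute is absent it logs this warning and, for backward compatibility, keeps delivering every notification type to the module, so the messages are benign. A minimal sketch of a module that declares the member, based on the upstream mgr_module API (the module body is illustrative, not taken from this cluster):

    # Hypothetical mgr module; only the NOTIFY_TYPES handling is shown.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Without this list the mgr logs "Module ... has missing
        # NOTIFY_TYPES member" and forwards all notify types anyway.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Invoked once per subscribed cluster-map change.
            self.log.debug("notify %s %s", notify_type, notify_id)
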
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x55aff10f5860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  1: '-n'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  2: 'mgr.compute-0.fmjeht'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  3: '-f'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  4: '--setuser'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  5: 'ceph'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  6: '--setgroup'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  7: 'ceph'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  8: '--default-log-to-file=false'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  9: '--default-log-to-journald=true'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr respawn  exe_path /proc/self/exe
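
The respawn block above shows the mgr re-executing itself after the set of enabled modules changed: it logs its original argv, then execs /proc/self/exe so the restart works even if /usr/bin/ceph-mgr has been replaced on disk. A rough Python equivalent of that re-exec pattern (not ceph code, just the same technique):

    import os
    import sys

    def respawn():
        # Re-exec the running image under its original arguments.
        # /proc/self/exe pins the exact binary this process started
        # from, mirroring the "exe_path /proc/self/exe" line above.
        os.execv("/proc/self/exe", sys.argv)
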
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.fmjeht(active, starting, since 0.0296768s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setuser ceph since I am not root
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setgroup ceph since I am not root
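
The two "ignoring --setuser/--setgroup" lines are the containerized mgr noting that it is not root, so the requested privilege drop is a no-op. The check amounts to something like this (illustrative only; uid/gid arguments are hypothetical):

    import os

    def drop_privileges(uid, gid):
        # ceph daemons only honor --setuser/--setgroup when launched
        # as root; inside the container the process already runs as
        # the ceph user, so the flags are logged and skipped.
        if os.geteuid() != 0:
            print("ignoring --setuser/--setgroup since I am not root")
            return
        os.setgid(gid)  # drop group first, while still privileged
        os.setuid(uid)
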
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh restarted
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh started
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla restarted
Oct 12 16:57:29 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla started
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.906+0000 7f25ff136140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:57:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:29.985+0000 7f25ff136140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:57:29 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:57:30 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:57:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.fmjeht(active, starting, since 1.09557s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:30.760+0000 7f25ff136140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:30 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:57:30 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:31.392+0000 7f25ff136140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:31.560+0000 7f25ff136140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:31.631+0000 7f25ff136140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:57:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:31.769+0000 7f25ff136140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:57:31 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:57:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:32.710+0000 7f25ff136140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:57:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:32.906+0000 7f25ff136140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:57:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:32.975+0000 7f25ff136140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:57:32 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:57:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:33.036+0000 7f25ff136140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:57:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:33.106+0000 7f25ff136140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:57:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:33.171+0000 7f25ff136140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:57:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:33.500+0000 7f25ff136140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:57:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:33.589+0000 7f25ff136140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:57:33 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.023+0000 7f25ff136140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.558+0000 7f25ff136140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.626+0000 7f25ff136140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:57:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.700+0000 7f25ff136140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.836+0000 7f25ff136140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:57:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:34.901+0000 7f25ff136140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:57:34 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:57:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:35.046+0000 7f25ff136140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:57:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:35.251+0000 7f25ff136140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:57:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:35.492+0000 7f25ff136140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:57:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:57:35.563+0000 7f25ff136140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x55fb7e30d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.fmjeht(active, starting, since 0.0336915s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e1 all = 1
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata"}]: dispatch
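
The burst of "mon metadata" / "mgr metadata" / "osd metadata" / "mds metadata" commands is the newly activated mgr taking inventory of every daemon in the cluster. The same sweep can be reproduced through the librados Python binding; the config path below is an assumption for illustration:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    for prefix in ("mon metadata", "mgr metadata",
                   "osd metadata", "mds metadata"):
        # mon_command takes a JSON command and returns (ret, out, status).
        ret, out, status = cluster.mon_command(
            json.dumps({"prefix": prefix}), b"")
        if ret == 0:
            print(prefix, "->", len(json.loads(out)), "records")
    cluster.shutdown()
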
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmjeht is now available
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:57:35
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: cephadm
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: dashboard
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO sso] Loading SSO DB version=1
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f257f499e50>, <progress.module.GhostEvent object at 0x7f257f499e80>, <progress.module.GhostEvent object at 0x7f257f499eb0>, <progress.module.GhostEvent object at 0x7f257f499ee0>, <progress.module.GhostEvent object at 0x7f257f499f10>, <progress.module.GhostEvent object at 0x7f257f499f40>, <progress.module.GhostEvent object at 0x7f257f499f70>, <progress.module.GhostEvent object at 0x7f257f499fa0>, <progress.module.GhostEvent object at 0x7f257f499fd0>, <progress.module.GhostEvent object at 0x7f257f4a6040>] historic events
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
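
The restful module constructed successfully but refuses to serve because no TLS certificate is configured, so port 8003 stays closed. If the module is actually wanted, the fix documented for the upstream restful module is its self-signed-cert helper; invoked here via subprocess purely as a sketch:

    import subprocess

    # Generates and stores a self-signed certificate for the restful
    # module (command per the upstream restful module docs); the
    # module then starts its HTTPS listener on server_port 8003.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"],
                   check=True)
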
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"} v 0)
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh restarted
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh started
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] setup complete
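
"setup complete" closes rbd_support's startup: it loaded (currently empty) mirror-snapshot and trash-purge schedules for the vms, volumes, backups, and images pools, then started its handler threads. New schedules land in the same store it just read; for example (hypothetical pool and interval, CLI invoked from Python only for illustration):

    import subprocess

    # Ask rbd_support to take a mirror snapshot of pool "vms" hourly;
    # the MirrorSnapshotScheduleHandler seen above picks this up on
    # its next load_schedules pass.
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "add",
                    "--pool", "vms", "1h"], check=True)
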
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: Active manager daemon compute-0.fmjeht restarted
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: Manager daemon compute-0.fmjeht is now available
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla restarted
Oct 12 16:57:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla started
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 12 16:57:35 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
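
The controller list is the dashboard wiring each REST endpoint before CherryPy starts; per the earlier "server: ssl=no host=192.168.122.100 port=8443" line it listens on plain HTTP here. A quick unauthenticated probe of one registered route looks like the sketch below (the /minimal sub-route of /api/health exists in recent dashboards; most /api routes answer 401 until a token from /api/auth is supplied):

    import urllib.error
    import urllib.request

    req = urllib.request.Request(
        "http://192.168.122.100:8443/api/health/minimal",
        # The dashboard REST API is versioned via the Accept header.
        headers={"Accept": "application/vnd.ceph.api.v1.0+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status, resp.read()[:200])
    except urllib.error.HTTPError as exc:
        print("dashboard replied", exc.code)
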
Oct 12 16:57:36 np0005481680 systemd[1]: Stopping User Manager for UID 42477...
Oct 12 16:57:36 np0005481680 systemd[74946]: Activating special unit Exit the Session...
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped target Main User Target.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped target Basic System.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped target Paths.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped target Sockets.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped target Timers.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 12 16:57:36 np0005481680 systemd[74946]: Closed D-Bus User Message Bus Socket.
Oct 12 16:57:36 np0005481680 systemd[74946]: Stopped Create User's Volatile Files and Directories.
Oct 12 16:57:36 np0005481680 systemd[74946]: Removed slice User Application Slice.
Oct 12 16:57:36 np0005481680 systemd[74946]: Reached target Shutdown.
Oct 12 16:57:36 np0005481680 systemd[74946]: Finished Exit the Session.
Oct 12 16:57:36 np0005481680 systemd[74946]: Reached target Exit the Session.
Oct 12 16:57:36 np0005481680 systemd[1]: user@42477.service: Deactivated successfully.
Oct 12 16:57:36 np0005481680 systemd[1]: Stopped User Manager for UID 42477.
Oct 12 16:57:36 np0005481680 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 12 16:57:36 np0005481680 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 12 16:57:36 np0005481680 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 12 16:57:36 np0005481680 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 12 16:57:36 np0005481680 systemd[1]: Removed slice User Slice of UID 42477.
Oct 12 16:57:36 np0005481680 systemd[1]: user-42477.slice: Consumed 40.182s CPU time.
Oct 12 16:57:36 np0005481680 systemd[1]: Created slice User Slice of UID 42477.
Oct 12 16:57:36 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 12 16:57:36 np0005481680 systemd-logind[783]: New session 36 of user ceph-admin.
Oct 12 16:57:36 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 12 16:57:36 np0005481680 systemd[1]: Starting User Manager for UID 42477...
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.module] Engine started.
Oct 12 16:57:36 np0005481680 systemd[91683]: Queued start job for default target Main User Target.
Oct 12 16:57:36 np0005481680 systemd[91683]: Created slice User Application Slice.
Oct 12 16:57:36 np0005481680 systemd[91683]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 12 16:57:36 np0005481680 systemd[91683]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 16:57:36 np0005481680 systemd[91683]: Reached target Paths.
Oct 12 16:57:36 np0005481680 systemd[91683]: Reached target Timers.
Oct 12 16:57:36 np0005481680 systemd[91683]: Starting D-Bus User Message Bus Socket...
Oct 12 16:57:36 np0005481680 systemd[91683]: Starting Create User's Volatile Files and Directories...
Oct 12 16:57:36 np0005481680 systemd[91683]: Finished Create User's Volatile Files and Directories.
Oct 12 16:57:36 np0005481680 systemd[91683]: Listening on D-Bus User Message Bus Socket.
Oct 12 16:57:36 np0005481680 systemd[91683]: Reached target Sockets.
Oct 12 16:57:36 np0005481680 systemd[91683]: Reached target Basic System.
Oct 12 16:57:36 np0005481680 systemd[91683]: Reached target Main User Target.
Oct 12 16:57:36 np0005481680 systemd[91683]: Startup finished in 173ms.
Oct 12 16:57:36 np0005481680 systemd[1]: Started User Manager for UID 42477.
Oct 12 16:57:36 np0005481680 systemd[1]: Started Session 36 of User ceph-admin.
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.fmjeht(active, since 1.06082s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 12 16:57:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0[73604]: 2025-10-12T20:57:36.641+0000 7f3542c8a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e2 new map
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2025-10-12T20:57:36.642338+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-10-12T20:57:36.642285+0000
    modified  2025-10-12T20:57:36.642285+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:36 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 12 16:57:36 np0005481680 systemd[1]: libpod-1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a.scope: Deactivated successfully.
Oct 12 16:57:36 np0005481680 podman[91480]: 2025-10-12 20:57:36.68662377 +0000 UTC m=+8.868293349 container died 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-062eab89c56d6c3d5d440ffa11747bd1092dcc55765eb0f02066eff784c52841-merged.mount: Deactivated successfully.
Oct 12 16:57:36 np0005481680 podman[91480]: 2025-10-12 20:57:36.747054408 +0000 UTC m=+8.928724017 container remove 1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a (image=quay.io/ceph/ceph:v19, name=blissful_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 16:57:36 np0005481680 systemd[1]: libpod-conmon-1cefe5520ed0398b14e7e4d2ea5d325d7e954d1511c51d061864da76fdde690a.scope: Deactivated successfully.
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:36 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:37] ENGINE Bus STARTING
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:37] ENGINE Bus STARTING
Oct 12 16:57:37 np0005481680 python3[91826]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
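The spec file mounted here (/tmp/ceph_mds.yml, seen inside the container as /home/ceph_spec.yaml) is not captured in the journal. Given the placement "compute-0;compute-1;compute-2" saved above, a plausible reconstruction is the hypothetical spec below; the sketch writes it out and applies it with the same CLI verb, assuming a host-installed ceph binary instead of the throwaway podman container:

    import subprocess
    import textwrap

    # Hypothetical reconstruction of /tmp/ceph_mds.yml; the real file is not in the log.
    spec = textwrap.dedent("""\
        service_type: mds
        service_id: cephfs
        placement:
          hosts:
            - compute-0
            - compute-1
            - compute-2
    """)
    with open('/tmp/ceph_mds.yml', 'w') as fh:
        fh.write(spec)

    # Equivalent of the podman-wrapped 'ceph orch apply --in-file' invocation above.
    subprocess.run(['ceph', 'orch', 'apply', '-i', '/tmp/ceph_mds.yml'], check=True)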
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:37] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:37] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.211947656 +0000 UTC m=+0.070546668 container create 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 16:57:37 np0005481680 systemd[1]: Started libpod-conmon-676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3.scope.
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.18048398 +0000 UTC m=+0.039083022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7576d062cb5ab5a03a4a14d1a3d439c9d2bfc35ca39fe6d435273de740f5bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7576d062cb5ab5a03a4a14d1a3d439c9d2bfc35ca39fe6d435273de740f5bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7576d062cb5ab5a03a4a14d1a3d439c9d2bfc35ca39fe6d435273de740f5bb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:37] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:37] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:37] ENGINE Bus STARTED
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:37] ENGINE Bus STARTED
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:57:37] ENGINE Client ('192.168.122.100', 38784) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:57:37] ENGINE Client ('192.168.122.100', 38784) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.323229867 +0000 UTC m=+0.181828909 container init 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.332148225 +0000 UTC m=+0.190747217 container start 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.335506252 +0000 UTC m=+0.194105324 container attach 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:37 np0005481680 podman[91899]: 2025-10-12 20:57:37.357500044 +0000 UTC m=+0.081380025 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:37 np0005481680 podman[91899]: 2025-10-12 20:57:37.468845087 +0000 UTC m=+0.192725018 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v5: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:37 np0005481680 nifty_franklin[91901]: Scheduled mds.cephfs update...
Oct 12 16:57:37 np0005481680 systemd[1]: libpod-676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3.scope: Deactivated successfully.
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.757982182 +0000 UTC m=+0.616581184 container died 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:37 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 16:57:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6a7576d062cb5ab5a03a4a14d1a3d439c9d2bfc35ca39fe6d435273de740f5bb-merged.mount: Deactivated successfully.
Oct 12 16:57:37 np0005481680 podman[91851]: 2025-10-12 20:57:37.804474344 +0000 UTC m=+0.663073346 container remove 676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3 (image=quay.io/ceph/ceph:v19, name=nifty_franklin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:37 np0005481680 systemd[1]: libpod-conmon-676f7342ad86c1ef2622595ab622dc008dec422c2624722713e20270ed42dec3.scope: Deactivated successfully.
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 podman[92064]: 2025-10-12 20:57:38.042563413 +0000 UTC m=+0.070716593 container exec 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:57:38 np0005481680 podman[92064]: 2025-10-12 20:57:38.077719783 +0000 UTC m=+0.105872973 container exec_died 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:57:38 np0005481680 python3[92114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
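This second ansible task creates an NFS cluster fronted by an ingress service: --virtual-ip gives keepalived/haproxy a VIP (192.168.122.2/24), and --ingress-mode=haproxy-protocol asks haproxy to pass original client addresses to the NFS-Ganesha backends via the PROXY protocol. The side effects appear a second later in the log (the .nfs pool, then the nfs.cephfs and ingress.nfs.cephfs specs). A minimal sketch of the same call without the podman wrapper, assuming a host-installed ceph CLI:

    import subprocess

    # Same operation as the podman-wrapped invocation logged above.
    subprocess.run([
        'ceph', 'nfs', 'cluster', 'create', 'cephfs',
        '--ingress',
        '--virtual-ip=192.168.122.2/24',
        '--ingress-mode=haproxy-protocol',
        '--placement=compute-0 compute-1 compute-2',
    ], check=True)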
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:38 np0005481680 podman[92125]: 2025-10-12 20:57:38.248504088 +0000 UTC m=+0.052146768 container create 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 systemd[1]: Started libpod-conmon-423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00.scope.
Oct 12 16:57:38 np0005481680 podman[92125]: 2025-10-12 20:57:38.225984291 +0000 UTC m=+0.029627021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd979c0ef588e53e9596ef843d13d8df3dbfa94c67b49b758cd195d8aa1529dd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd979c0ef588e53e9596ef843d13d8df3dbfa94c67b49b758cd195d8aa1529dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd979c0ef588e53e9596ef843d13d8df3dbfa94c67b49b758cd195d8aa1529dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:38 np0005481680 podman[92125]: 2025-10-12 20:57:38.365205416 +0000 UTC m=+0.168848096 container init 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 16:57:38 np0005481680 podman[92125]: 2025-10-12 20:57:38.373089659 +0000 UTC m=+0.176732339 container start 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:38 np0005481680 podman[92125]: 2025-10-12 20:57:38.376361532 +0000 UTC m=+0.180004212 container attach 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:37] ENGINE Bus STARTING
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:37] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:37] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:37] ENGINE Bus STARTED
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:57:37] ENGINE Client ('192.168.122.100', 38784) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.fmjeht(active, since 2s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:57:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
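These WRN lines are an autotuning artifact rather than a deployment failure: cephadm's osd_memory_target autotuner divides the (small) VM memory among the OSDs on each host, and the result lands below the option's hard minimum, so the mon rejects the "config set" and the previous value stays in effect. The arithmetic, using the values from the log:

    # cephadm's computed target vs. the configured floor (values from the log).
    target = 134237798             # bytes; 134237798 / 2**20 ~= 128.02 MiB ("128.0M")
    minimum = 939524096            # bytes; exactly 896 * 2**20 = 896 MiB
    print(target / 2**20)          # -> ~128.02
    print(minimum / 2**20)         # -> 896.0
    # target < minimum, so 'config set osd_memory_target' fails validation and
    # the mgr logs "Unable to set ... is below minimum 939524096".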
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
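The "Updating <host>:/etc/ceph/ceph.conf" lines follow directly from the "config generate-minimal-conf" mon_command dispatched just above: cephadm renders the minimal conf and pushes it (plus the admin keyring) to every managed host. The same minimal conf can be fetched by hand; a sketch via python-rados, under the same connection assumptions as before:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
    cluster.connect()
    # Same mon_command the mgr issues before rewriting each host's ceph.conf.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b'')
    print(outbuf.decode())  # the minimal ceph.conf cephadm distributes to the hosts
    cluster.shutdown()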
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-1 to 134237798: error parsing value: Value '134237798' is below minimum 939524096
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:57:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v6: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct 12 16:57:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 37 pg[8.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.fmjeht(active, since 5s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: Unable to set osd_memory_target on compute-2 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 12 16:57:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 38 pg[8.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:57:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:40 np0005481680 systemd[1]: libpod-423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00.scope: Deactivated successfully.
Oct 12 16:57:40 np0005481680 podman[92125]: 2025-10-12 20:57:40.863807648 +0000 UTC m=+2.667450378 container died 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 16:57:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fd979c0ef588e53e9596ef843d13d8df3dbfa94c67b49b758cd195d8aa1529dd-merged.mount: Deactivated successfully.
Oct 12 16:57:40 np0005481680 podman[92125]: 2025-10-12 20:57:40.92402269 +0000 UTC m=+2.727665410 container remove 423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00 (image=quay.io/ceph/ceph:v19, name=musing_shamir, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 16:57:40 np0005481680 systemd[1]: libpod-conmon-423620723aa538252d56831908804d8f6eb5c463683fe3ef0f34cf0729a7ae00.scope: Deactivated successfully.
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:40 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v9: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:41 np0005481680 python3[93059]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.fmjeht(active, since 6s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:57:42 np0005481680 python3[93282]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760302661.3705719-33947-62620565550650/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=66d743d6767b50dcfc22a4999c89f03e91ed32ed backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:42 np0005481680 python3[93432]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev b357f477-ea8b-4233-933d-671d41376d44 (Updating node-exporter deployment (+2 -> 3))
Oct 12 16:57:42 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct 12 16:57:42 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct 12 16:57:42 np0005481680 podman[93433]: 2025-10-12 20:57:42.649480607 +0000 UTC m=+0.051891120 container create 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:57:42 np0005481680 systemd[1]: Started libpod-conmon-9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf.scope.
Oct 12 16:57:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3594b09780cc1ea9f20eee24cfed85ee6d9b98937d48a267dda91a4777278255/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3594b09780cc1ea9f20eee24cfed85ee6d9b98937d48a267dda91a4777278255/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:42 np0005481680 podman[93433]: 2025-10-12 20:57:42.634853232 +0000 UTC m=+0.037263765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:42 np0005481680 podman[93433]: 2025-10-12 20:57:42.730092102 +0000 UTC m=+0.132502645 container init 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 16:57:42 np0005481680 podman[93433]: 2025-10-12 20:57:42.740422446 +0000 UTC m=+0.142832999 container start 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:57:42 np0005481680 podman[93433]: 2025-10-12 20:57:42.744453399 +0000 UTC m=+0.146863912 container attach 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:42 np0005481680 ceph-mon[73608]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 12 16:57:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct 12 16:57:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3079341514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 12 16:57:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3079341514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 12 16:57:43 np0005481680 systemd[1]: libpod-9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf.scope: Deactivated successfully.
Oct 12 16:57:43 np0005481680 conmon[93448]: conmon 9fb89095bf3c8927fbcd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf.scope/container/memory.events
Oct 12 16:57:43 np0005481680 podman[93433]: 2025-10-12 20:57:43.186298917 +0000 UTC m=+0.588709430 container died 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 16:57:43 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3594b09780cc1ea9f20eee24cfed85ee6d9b98937d48a267dda91a4777278255-merged.mount: Deactivated successfully.
Oct 12 16:57:43 np0005481680 podman[93433]: 2025-10-12 20:57:43.225751098 +0000 UTC m=+0.628161611 container remove 9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf (image=quay.io/ceph/ceph:v19, name=dreamy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 16:57:43 np0005481680 systemd[1]: libpod-conmon-9fb89095bf3c8927fbcd9fc418d12a57407e1cedf156f025c47db5b35eb3adcf.scope: Deactivated successfully.
Oct 12 16:57:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v11: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 12 16:57:43 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3079341514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 12 16:57:43 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/3079341514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 12 16:57:44 np0005481680 python3[93511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.127936167 +0000 UTC m=+0.051172592 container create 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 16:57:44 np0005481680 systemd[1]: Started libpod-conmon-8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980.scope.
Oct 12 16:57:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d8444b9ccbd6d67de173ae826a49c435cfc934bd2b30c3ae4d94bd0927c1ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d8444b9ccbd6d67de173ae826a49c435cfc934bd2b30c3ae4d94bd0927c1ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.103594174 +0000 UTC m=+0.026830689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.211340803 +0000 UTC m=+0.134577258 container init 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.224244294 +0000 UTC m=+0.147480759 container start 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.23112046 +0000 UTC m=+0.154356915 container attach 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 12 16:57:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177819039' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 12 16:57:44 np0005481680 youthful_franklin[93529]: 
Oct 12 16:57:44 np0005481680 youthful_franklin[93529]: {"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":73,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1760302614,"num_in_osds":3,"osd_in_since":1760302596,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84156416,"bytes_avail":64327770112,"bytes_total":64411926528,"read_bytes_sec":30030,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-10-12T20:57:36.642338+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-10-12T20:57:00.493331+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.orllvh":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.iamnla":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b357f477-ea8b-4233-933d-671d41376d44":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 12 16:57:44 np0005481680 systemd[1]: libpod-8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980.scope: Deactivated successfully.
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.682623095 +0000 UTC m=+0.605859550 container died 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-79d8444b9ccbd6d67de173ae826a49c435cfc934bd2b30c3ae4d94bd0927c1ad-merged.mount: Deactivated successfully.
Oct 12 16:57:44 np0005481680 podman[93513]: 2025-10-12 20:57:44.738141717 +0000 UTC m=+0.661378182 container remove 8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980 (image=quay.io/ceph/ceph:v19, name=youthful_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:44 np0005481680 systemd[1]: libpod-conmon-8c1450c5429dc8b23f82d4eacfdab3a1a02ab569e2ec34714dab487c57dfb980.scope: Deactivated successfully.
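
The `status --format json | jq .monmap.num_mons` check above only consumes the monmap count, but the same payload carries the health state that explains the HEALTH_ERR printed by youthful_franklin. A minimal sketch of reading both fields in Python, assuming the JSON shape shown above; parse_status is an illustrative helper, not part of the playbook:

    import json

    def parse_status(raw: str) -> tuple[int, str, list[str]]:
        """Return (num_mons, overall health, active health-check names) from `ceph status --format json`."""
        status = json.loads(raw)
        num_mons = status["monmap"]["num_mons"]      # the field the jq filter above selects
        health = status["health"]["status"]          # "HEALTH_ERR" in the payload above
        checks = sorted(status["health"]["checks"])  # check names, e.g. MDS_ALL_DOWN
        return num_mons, health, checks

Against the payload above this yields (3, "HEALTH_ERR", ["MDS_ALL_DOWN", "MDS_UP_LESS_THAN_MAX"]): the quorum is complete even while the filesystem is offline.
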
Oct 12 16:57:45 np0005481680 python3[93593]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.115666797 +0000 UTC m=+0.042014787 container create 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 16:57:45 np0005481680 systemd[1]: Started libpod-conmon-77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421.scope.
Oct 12 16:57:45 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12cb28dd65c56d7ce2714a454133bd4d7a95525c15be0cd0be06d261eb79ac1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12cb28dd65c56d7ce2714a454133bd4d7a95525c15be0cd0be06d261eb79ac1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.094799102 +0000 UTC m=+0.021147112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.196621121 +0000 UTC m=+0.122969181 container init 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.202681616 +0000 UTC m=+0.129029636 container start 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.206281808 +0000 UTC m=+0.132629868 container attach 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:45 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct 12 16:57:45 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 16:57:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221267582' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 16:57:45 np0005481680 intelligent_mendeleev[93609]: 
Oct 12 16:57:45 np0005481680 intelligent_mendeleev[93609]: {"epoch":3,"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","modified":"2025-10-12T20:56:25.747024Z","created":"2025-10-12T20:54:15.161334Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct 12 16:57:45 np0005481680 intelligent_mendeleev[93609]: dumped monmap epoch 3
Oct 12 16:57:45 np0005481680 systemd[1]: libpod-77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421.scope: Deactivated successfully.
Oct 12 16:57:45 np0005481680 conmon[93609]: conmon 77817d0b33f9e6660d12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421.scope/container/memory.events
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.605121454 +0000 UTC m=+0.531469464 container died 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v12: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 12 16:57:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e12cb28dd65c56d7ce2714a454133bd4d7a95525c15be0cd0be06d261eb79ac1-merged.mount: Deactivated successfully.
Oct 12 16:57:45 np0005481680 podman[93594]: 2025-10-12 20:57:45.652472287 +0000 UTC m=+0.578820317 container remove 77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421 (image=quay.io/ceph/ceph:v19, name=intelligent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:57:45 np0005481680 systemd[1]: libpod-conmon-77817d0b33f9e6660d126148ba0c7c514558823f2ab55069a72cff6a22a73421.scope: Deactivated successfully.
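
The monmap that intelligent_mendeleev dumped carries both a v2 (port 3300) and a v1 (port 6789) endpoint per monitor in its addrvec. A hedged sketch of flattening that into name-to-address pairs, assuming the addrvec layout shown above; mon_endpoints is an illustrative helper, not tooling from this deployment:

    import json

    def mon_endpoints(monmap_json: str) -> dict[str, dict[str, str]]:
        """Map mon name -> {"v2": addr, "v1": addr} from a `mon dump --format json` payload."""
        monmap = json.loads(monmap_json)
        out: dict[str, dict[str, str]] = {}
        for mon in monmap["mons"]:
            # addrvec entries carry a protocol type and an address string
            out[mon["name"]] = {a["type"]: a["addr"] for a in mon["public_addrs"]["addrvec"]}
        return out

For the epoch-3 monmap above this gives, e.g., {"compute-0": {"v2": "192.168.122.100:3300", "v1": "192.168.122.100:6789"}, ...}.
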
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 12 16:57:46 np0005481680 python3[93673]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:46 np0005481680 podman[93674]: 2025-10-12 20:57:46.439377024 +0000 UTC m=+0.065154341 container create 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:46 np0005481680 systemd[1]: Started libpod-conmon-094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362.scope.
Oct 12 16:57:46 np0005481680 podman[93674]: 2025-10-12 20:57:46.409548179 +0000 UTC m=+0.035325516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05ddee09063f707c3e0ec13f407b78c9922725876ce19cba5d0ae7cde34f8ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05ddee09063f707c3e0ec13f407b78c9922725876ce19cba5d0ae7cde34f8ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:46 np0005481680 podman[93674]: 2025-10-12 20:57:46.530735644 +0000 UTC m=+0.156512961 container init 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:57:46 np0005481680 podman[93674]: 2025-10-12 20:57:46.540030482 +0000 UTC m=+0.165807809 container start 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:46 np0005481680 podman[93674]: 2025-10-12 20:57:46.544817444 +0000 UTC m=+0.170594751 container attach 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct 12 16:57:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1510125274' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 12 16:57:46 np0005481680 sweet_golick[93690]: [client.openstack]
Oct 12 16:57:46 np0005481680 sweet_golick[93690]: 	key = AQBTFexoAAAAABAAjoav7tTHlB45tASOkOqA2A==
Oct 12 16:57:46 np0005481680 sweet_golick[93690]: 	caps mgr = "allow *"
Oct 12 16:57:46 np0005481680 sweet_golick[93690]: 	caps mon = "profile rbd"
Oct 12 16:57:46 np0005481680 sweet_golick[93690]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 12 16:57:47 np0005481680 systemd[1]: libpod-094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362.scope: Deactivated successfully.
Oct 12 16:57:47 np0005481680 podman[93674]: 2025-10-12 20:57:47.006648704 +0000 UTC m=+0.632425991 container died 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 16:57:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c05ddee09063f707c3e0ec13f407b78c9922725876ce19cba5d0ae7cde34f8ee-merged.mount: Deactivated successfully.
Oct 12 16:57:47 np0005481680 podman[93674]: 2025-10-12 20:57:47.0435821 +0000 UTC m=+0.669359387 container remove 094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362 (image=quay.io/ceph/ceph:v19, name=sweet_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 16:57:47 np0005481680 systemd[1]: libpod-conmon-094fce51c06f205986cf8c121d79ba3ba91b45f3dcea3b56a887f6ae95e5b362.scope: Deactivated successfully.
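
The keyring sweet_golick printed is plain `[entity]` / `key =` / `caps <daemon> = "..."` text. A small sketch of parsing it into a dict, assuming only that shape; parse_keyring is illustrative, since Ceph's own tooling normally consumes the keyring file directly:

    def parse_keyring(text: str) -> dict[str, dict[str, str]]:
        """Parse `ceph auth get` output into {entity: {field: value}}."""
        entities: dict[str, dict[str, str]] = {}
        current = None
        for line in text.splitlines():
            line = line.strip()
            if line.startswith("[") and line.endswith("]"):
                current = line[1:-1]          # section header, e.g. client.openstack
                entities[current] = {}
            elif current and "=" in line:
                k, v = line.split("=", 1)     # maxsplit keeps '==' padding in base64 keys intact
                entities[current][k.strip()] = v.strip().strip('"')
        return entities

Applied to the output above, entities["client.openstack"]["caps osd"] lists the rbd profiles per pool that the OpenStack client is restricted to.
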
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/1510125274' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 12 16:57:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v13: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:47 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev b357f477-ea8b-4233-933d-671d41376d44 (Updating node-exporter deployment (+2 -> 3))
Oct 12 16:57:47 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event b357f477-ea8b-4233-933d-671d41376d44 (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.430229209 +0000 UTC m=+0.052012074 container create 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:48 np0005481680 systemd[1]: Started libpod-conmon-5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5.scope.
Oct 12 16:57:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.414382292 +0000 UTC m=+0.036165187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.512269119 +0000 UTC m=+0.134052004 container init 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.522915723 +0000 UTC m=+0.144698628 container start 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:48 np0005481680 wizardly_hodgkin[93956]: 167 167
Oct 12 16:57:48 np0005481680 systemd[1]: libpod-5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5.scope: Deactivated successfully.
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.529228014 +0000 UTC m=+0.151010899 container attach 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.530337482 +0000 UTC m=+0.152120407 container died 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c902ef3edef95a7cc74f8a20d3df341f35ec9442fc7390e35660b1bdf672a46d-merged.mount: Deactivated successfully.
Oct 12 16:57:48 np0005481680 podman[93905]: 2025-10-12 20:57:48.57787916 +0000 UTC m=+0.199662035 container remove 5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Oct 12 16:57:48 np0005481680 systemd[1]: libpod-conmon-5a74293774114d75bcbe990ae61488e60b995e9458bcf99ee4ac366b4e845be5.scope: Deactivated successfully.
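
wizardly_hodgkin printing `167 167` reads like a uid/gid probe: 167 is the ceph user and group id inside the upstream image, and host paths handed to the daemons (such as /var/lib/ceph/...) need matching ownership. A sketch of the equivalent host-side check, under that reading of the output; owned_by_ceph and the constants are illustrative:

    import os

    CEPH_UID = CEPH_GID = 167  # assumption: the "167 167" above is the in-image ceph uid/gid

    def owned_by_ceph(path: str) -> bool:
        """True if a host path is owned by the ceph uid/gid the containers run as."""
        st = os.stat(path)
        return st.st_uid == CEPH_UID and st.st_gid == CEPH_GID
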
Oct 12 16:57:48 np0005481680 ansible-async_wrapper.py[94001]: Invoked with j695686210551 30 /home/zuul/.ansible/tmp/ansible-tmp-1760302668.12417-34019-22854257372276/AnsiballZ_command.py _
Oct 12 16:57:48 np0005481680 ansible-async_wrapper.py[94020]: Starting module and watcher
Oct 12 16:57:48 np0005481680 ansible-async_wrapper.py[94020]: Start watching 94021 (30)
Oct 12 16:57:48 np0005481680 ansible-async_wrapper.py[94021]: Start module (94021)
Oct 12 16:57:48 np0005481680 ansible-async_wrapper.py[94001]: Return async_wrapper task started.
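
ansible-async_wrapper.py forks the module (pid 94021 here), watches it against the 30-second budget, and returns to the controller immediately; the caller later polls a status file keyed by the job id (j695686210551 above). A minimal sketch of that polling side, assuming the default ~/.ansible_async layout and the `finished` flag that async_status checks; wait_for_async is illustrative:

    import json
    import time
    from pathlib import Path

    def wait_for_async(jid: str, timeout: float, home: str = "/home/zuul") -> dict:
        """Poll the status file async_wrapper maintains until the job reports finished."""
        result_file = Path(home, ".ansible_async", jid)  # assumption: default async results dir
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                data = json.loads(result_file.read_text())
            except (FileNotFoundError, ValueError):
                data = {}  # not written yet, or caught mid-write
            if data.get("finished"):
                return data
            time.sleep(1)
        raise TimeoutError(f"async job {jid} still running after {timeout}s")
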
Oct 12 16:57:48 np0005481680 podman[94007]: 2025-10-12 20:57:48.776529139 +0000 UTC m=+0.061646550 container create d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 16:57:48 np0005481680 systemd[1]: Started libpod-conmon-d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9.scope.
Oct 12 16:57:48 np0005481680 podman[94007]: 2025-10-12 20:57:48.755927781 +0000 UTC m=+0.041045222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:48 np0005481680 podman[94007]: 2025-10-12 20:57:48.876957381 +0000 UTC m=+0.162074782 container init d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:48 np0005481680 python3[94024]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:48 np0005481680 podman[94007]: 2025-10-12 20:57:48.889313518 +0000 UTC m=+0.174430929 container start d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:57:48 np0005481680 podman[94007]: 2025-10-12 20:57:48.892807487 +0000 UTC m=+0.177924898 container attach d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 16:57:48 np0005481680 podman[94032]: 2025-10-12 20:57:48.95969353 +0000 UTC m=+0.055118073 container create 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 16:57:48 np0005481680 systemd[1]: Started libpod-conmon-6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d.scope.
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:48.932285848 +0000 UTC m=+0.027710451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3724eb78bac7872efa8ab561a77514550a54010942648f8a3cb3da64e507431b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3724eb78bac7872efa8ab561a77514550a54010942648f8a3cb3da64e507431b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:49.05103181 +0000 UTC m=+0.146456373 container init 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:49.060446211 +0000 UTC m=+0.155870774 container start 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:49.065345636 +0000 UTC m=+0.160770169 container attach 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:49 np0005481680 awesome_darwin[94028]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:57:49 np0005481680 awesome_darwin[94028]: --> All data devices are unavailable
Oct 12 16:57:49 np0005481680 systemd[1]: libpod-d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9.scope: Deactivated successfully.
Oct 12 16:57:49 np0005481680 podman[94007]: 2025-10-12 20:57:49.302049449 +0000 UTC m=+0.587166870 container died d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:57:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e37f38c21eab5e89843f01a22493622f7630ede1611784709e278ec29e56d9fc-merged.mount: Deactivated successfully.
Oct 12 16:57:49 np0005481680 podman[94007]: 2025-10-12 20:57:49.361522083 +0000 UTC m=+0.646639494 container remove d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 16:57:49 np0005481680 systemd[1]: libpod-conmon-d202b286a12d8f6ee6c4613a32735a52c415351a9e204c48c9afbf23069296d9.scope: Deactivated successfully.
Oct 12 16:57:49 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:57:49 np0005481680 agitated_keldysh[94048]: 
Oct 12 16:57:49 np0005481680 agitated_keldysh[94048]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 12 16:57:49 np0005481680 systemd[1]: libpod-6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d.scope: Deactivated successfully.
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:49.443180124 +0000 UTC m=+0.538604647 container died 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3724eb78bac7872efa8ab561a77514550a54010942648f8a3cb3da64e507431b-merged.mount: Deactivated successfully.
Oct 12 16:57:49 np0005481680 podman[94032]: 2025-10-12 20:57:49.48711851 +0000 UTC m=+0.582543033 container remove 6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d (image=quay.io/ceph/ceph:v19, name=agitated_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:57:49 np0005481680 systemd[1]: libpod-conmon-6aefd142a1e8f12024a2cbb1260af24080b8ce42f8aa2bc5703839cf9629f32d.scope: Deactivated successfully.
Oct 12 16:57:49 np0005481680 ansible-async_wrapper.py[94021]: Module complete (94021)
Oct 12 16:57:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v14: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct 12 16:57:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.010607729 +0000 UTC m=+0.064007540 container create 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:50 np0005481680 systemd[1]: Started libpod-conmon-655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef.scope.
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:49.986687636 +0000 UTC m=+0.040087457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.113248208 +0000 UTC m=+0.166648039 container init 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.126421195 +0000 UTC m=+0.179821006 container start 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:50 np0005481680 python3[94253]: ansible-ansible.legacy.async_status Invoked with jid=j695686210551.94001 mode=status _async_dir=/root/.ansible_async
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.130689925 +0000 UTC m=+0.184089796 container attach 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:50 np0005481680 quizzical_chebyshev[94261]: 167 167
Oct 12 16:57:50 np0005481680 systemd[1]: libpod-655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef.scope: Deactivated successfully.
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.137028437 +0000 UTC m=+0.190428258 container died 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b2e3b50153bbb041ab6dde26e84701753fcd841dd802cce421775867dc1b840c-merged.mount: Deactivated successfully.
Oct 12 16:57:50 np0005481680 podman[94235]: 2025-10-12 20:57:50.181575698 +0000 UTC m=+0.234975479 container remove 655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:50 np0005481680 systemd[1]: libpod-conmon-655bd7dc9df2f2a43de175d7f5dbe8a85d8523768dcfb56764356c85729c76ef.scope: Deactivated successfully.
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.410239636 +0000 UTC m=+0.055430612 container create 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 16:57:50 np0005481680 systemd[1]: Started libpod-conmon-53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea.scope.
Oct 12 16:57:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb0dcd0243136c661c38f480588017d4bea0ca58fd834278575604c02ae6150/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb0dcd0243136c661c38f480588017d4bea0ca58fd834278575604c02ae6150/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb0dcd0243136c661c38f480588017d4bea0ca58fd834278575604c02ae6150/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb0dcd0243136c661c38f480588017d4bea0ca58fd834278575604c02ae6150/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.391085485 +0000 UTC m=+0.036276451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:50 np0005481680 python3[94330]: ansible-ansible.legacy.async_status Invoked with jid=j695686210551.94001 mode=cleanup _async_dir=/root/.ansible_async
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.514345962 +0000 UTC m=+0.159536958 container init 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.5224739 +0000 UTC m=+0.167664866 container start 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.525955089 +0000 UTC m=+0.171146045 container attach 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 16:57:50 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 11 completed events
Oct 12 16:57:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:57:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:50 np0005481680 exciting_golick[94350]: {
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:    "0": [
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:        {
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "devices": [
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "/dev/loop3"
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            ],
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "lv_name": "ceph_lv0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "lv_size": "21470642176",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "name": "ceph_lv0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "tags": {
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.cluster_name": "ceph",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.crush_device_class": "",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.encrypted": "0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.osd_id": "0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.type": "block",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.vdo": "0",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:                "ceph.with_tpm": "0"
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            },
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "type": "block",
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:            "vg_name": "ceph_vg0"
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:        }
Oct 12 16:57:50 np0005481680 exciting_golick[94350]:    ]
Oct 12 16:57:50 np0005481680 exciting_golick[94350]: }
Oct 12 16:57:50 np0005481680 systemd[1]: libpod-53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea.scope: Deactivated successfully.
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.856878946 +0000 UTC m=+0.502069972 container died 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:57:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9cb0dcd0243136c661c38f480588017d4bea0ca58fd834278575604c02ae6150-merged.mount: Deactivated successfully.
Oct 12 16:57:50 np0005481680 podman[94333]: 2025-10-12 20:57:50.962275725 +0000 UTC m=+0.607466681 container remove 53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:50 np0005481680 systemd[1]: libpod-conmon-53344ebffe2639142a2a60d8b088567a8d9362af4085ef84311da68406fc7eea.scope: Deactivated successfully.
Oct 12 16:57:51 np0005481680 python3[94419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.347277317 +0000 UTC m=+0.096710648 container create af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:51 np0005481680 systemd[1]: Started libpod-conmon-af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567.scope.
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.313093622 +0000 UTC m=+0.062527013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c1ff58cfc0ec25d361bee488c7f123889db6595de77b4f2517e2f78ca0407/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c1ff58cfc0ec25d361bee488c7f123889db6595de77b4f2517e2f78ca0407/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.440042363 +0000 UTC m=+0.189475774 container init af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.452275607 +0000 UTC m=+0.201708928 container start af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.456489425 +0000 UTC m=+0.205922816 container attach af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 16:57:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v15: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.679337313 +0000 UTC m=+0.044877111 container create 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 16:57:51 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:51 np0005481680 systemd[1]: Started libpod-conmon-687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218.scope.
Oct 12 16:57:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.662245685 +0000 UTC m=+0.027785533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:51 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.786133128 +0000 UTC m=+0.151672936 container init 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.790938702 +0000 UTC m=+0.156478460 container start 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.794230115 +0000 UTC m=+0.159769903 container attach 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:57:51 np0005481680 elegant_shamir[94533]: 167 167
Oct 12 16:57:51 np0005481680 systemd[1]: libpod-687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218.scope: Deactivated successfully.
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.797792337 +0000 UTC m=+0.163332115 container died 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 12 16:57:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-da34dedeb30d9774b3abe0da8e6c6b296174901be873e7f010c18058fa4a2177-merged.mount: Deactivated successfully.
Oct 12 16:57:51 np0005481680 podman[94517]: 2025-10-12 20:57:51.835338489 +0000 UTC m=+0.200878257 container remove 687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 12 16:57:51 np0005481680 systemd[1]: libpod-conmon-687e86b5d204ffc7494d9e12f5b81d619d6c509b700dda0f4b31a083cd69d218.scope: Deactivated successfully.
Oct 12 16:57:51 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:57:51 np0005481680 naughty_solomon[94460]: 
Oct 12 16:57:51 np0005481680 naughty_solomon[94460]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 12 16:57:51 np0005481680 systemd[1]: libpod-af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567.scope: Deactivated successfully.
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.921973868 +0000 UTC m=+0.671407169 container died af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-447c1ff58cfc0ec25d361bee488c7f123889db6595de77b4f2517e2f78ca0407-merged.mount: Deactivated successfully.
Oct 12 16:57:51 np0005481680 podman[94447]: 2025-10-12 20:57:51.971301012 +0000 UTC m=+0.720734323 container remove af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567 (image=quay.io/ceph/ceph:v19, name=naughty_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 16:57:51 np0005481680 systemd[1]: libpod-conmon-af29d43b50195289a9f8532a96f982642ba2cb65262ddd75a3c375ce42fe5567.scope: Deactivated successfully.
Oct 12 16:57:52 np0005481680 podman[94570]: 2025-10-12 20:57:52.065684079 +0000 UTC m=+0.073773171 container create 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:57:52 np0005481680 systemd[1]: Started libpod-conmon-1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023.scope.
Oct 12 16:57:52 np0005481680 podman[94570]: 2025-10-12 20:57:52.036421499 +0000 UTC m=+0.044510641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2433f500cc039261582a03b4609de3b8e8ec89ee7dc49745b2f44a355b1935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2433f500cc039261582a03b4609de3b8e8ec89ee7dc49745b2f44a355b1935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2433f500cc039261582a03b4609de3b8e8ec89ee7dc49745b2f44a355b1935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2433f500cc039261582a03b4609de3b8e8ec89ee7dc49745b2f44a355b1935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:52 np0005481680 podman[94570]: 2025-10-12 20:57:52.17855001 +0000 UTC m=+0.186639142 container init 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 16:57:52 np0005481680 podman[94570]: 2025-10-12 20:57:52.193946015 +0000 UTC m=+0.202035107 container start 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:52 np0005481680 podman[94570]: 2025-10-12 20:57:52.197574977 +0000 UTC m=+0.205664089 container attach 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Oct 12 16:57:52 np0005481680 python3[94669]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:52 np0005481680 podman[94682]: 2025-10-12 20:57:52.98783806 +0000 UTC m=+0.052015933 container create 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 16:57:53 np0005481680 lvm[94699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:57:53 np0005481680 lvm[94699]: VG ceph_vg0 finished
Oct 12 16:57:53 np0005481680 systemd[1]: Started libpod-conmon-0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07.scope.
Oct 12 16:57:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06add830a85f841b44981697dcead7ae9a0cf4452a7cfaaa03e064a27a27c31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06add830a85f841b44981697dcead7ae9a0cf4452a7cfaaa03e064a27a27c31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:53 np0005481680 podman[94682]: 2025-10-12 20:57:53.063446106 +0000 UTC m=+0.127623989 container init 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:53 np0005481680 podman[94682]: 2025-10-12 20:57:52.969875259 +0000 UTC m=+0.034053162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:53 np0005481680 amazing_moser[94586]: {}
Oct 12 16:57:53 np0005481680 podman[94682]: 2025-10-12 20:57:53.075030933 +0000 UTC m=+0.139208796 container start 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Oct 12 16:57:53 np0005481680 podman[94682]: 2025-10-12 20:57:53.078833401 +0000 UTC m=+0.143011264 container attach 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 16:57:53 np0005481680 systemd[1]: libpod-1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023.scope: Deactivated successfully.
Oct 12 16:57:53 np0005481680 systemd[1]: libpod-1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023.scope: Consumed 1.572s CPU time.
Oct 12 16:57:53 np0005481680 podman[94570]: 2025-10-12 20:57:53.096154264 +0000 UTC m=+1.104243416 container died 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:57:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2e2433f500cc039261582a03b4609de3b8e8ec89ee7dc49745b2f44a355b1935-merged.mount: Deactivated successfully.
Oct 12 16:57:53 np0005481680 podman[94570]: 2025-10-12 20:57:53.13585299 +0000 UTC m=+1.143942042 container remove 1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:57:53 np0005481680 systemd[1]: libpod-conmon-1c07510d8934911395f839c9d51be80177d0defcc11c3373bb700a7a91373023.scope: Deactivated successfully.
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:53 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 91a5563f-6b04-4bfe-8c40-3180eb6c0d27 (Updating rgw.rgw deployment (+3 -> 3))
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.lonmvq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.lonmvq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.lonmvq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:53 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.lonmvq on compute-2
Oct 12 16:57:53 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.lonmvq on compute-2
Oct 12 16:57:53 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:57:53 np0005481680 vibrant_swartz[94704]: 
Oct 12 16:57:53 np0005481680 vibrant_swartz[94704]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 12 16:57:53 np0005481680 systemd[1]: libpod-0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07.scope: Deactivated successfully.
Oct 12 16:57:53 np0005481680 podman[94742]: 2025-10-12 20:57:53.508962808 +0000 UTC m=+0.038074636 container died 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 16:57:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d06add830a85f841b44981697dcead7ae9a0cf4452a7cfaaa03e064a27a27c31-merged.mount: Deactivated successfully.
Oct 12 16:57:53 np0005481680 podman[94742]: 2025-10-12 20:57:53.553010536 +0000 UTC m=+0.082122344 container remove 0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07 (image=quay.io/ceph/ceph:v19, name=vibrant_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:53 np0005481680 systemd[1]: libpod-conmon-0a1fc5caca12e55929c87a5ebfa7f406746359896f107ea26acb4546ab375b07.scope: Deactivated successfully.
Oct 12 16:57:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v16: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:53 np0005481680 ansible-async_wrapper.py[94020]: Done in kid B.
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.lonmvq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.lonmvq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
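The dispatch/finished pair above is the mgr obtaining a keyring for the new RGW daemon before deploying it. A sketch of the equivalent request issued by hand with the admin keyring, caps copied verbatim from the audit line (the .lonmvq suffix is cephadm's random per-daemon tag):

    import subprocess

    # Same entity and capability profile as in the audit log above.
    subprocess.run([
        "ceph", "auth", "get-or-create", "client.rgw.rgw.compute-2.lonmvq",
        "mon", "allow *",
        "mgr", "allow rw",
        "osd", "allow rwx tag rgw *=*",
    ], check=True)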
Oct 12 16:57:53 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:54 np0005481680 python3[94782]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
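Every cluster query in this play follows the same pattern: a throwaway ceph container on the host network with the admin keyring mounted in. A sketch of the equivalent call from Python, arguments lifted from the logged command line:

    import json
    import subprocess

    # One-shot orchestrator query, mirroring the ansible task above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "5adb8c35-1b74-5730-a252-62321f654cd5",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "ps", "-f", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    daemons = json.loads(out)
    print(f"{len(daemons)} daemons reported by the orchestrator")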
Oct 12 16:57:54 np0005481680 ceph-mon[73608]: Deploying daemon rgw.rgw.compute-2.lonmvq on compute-2
Oct 12 16:57:54 np0005481680 podman[94783]: 2025-10-12 20:57:54.783667506 +0000 UTC m=+0.057929730 container create 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:57:54 np0005481680 systemd[1]: Started libpod-conmon-3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f.scope.
Oct 12 16:57:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:54 np0005481680 podman[94783]: 2025-10-12 20:57:54.763557987 +0000 UTC m=+0.037820261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f5ecc44c36499deb07996de15225ee8b4b76ad090b56c48789238ac8155a79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f5ecc44c36499deb07996de15225ee8b4b76ad090b56c48789238ac8155a79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:54 np0005481680 podman[94783]: 2025-10-12 20:57:54.889233546 +0000 UTC m=+0.163495770 container init 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 16:57:54 np0005481680 podman[94783]: 2025-10-12 20:57:54.900235043 +0000 UTC m=+0.174497277 container start 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 16:57:54 np0005481680 podman[94783]: 2025-10-12 20:57:54.905173403 +0000 UTC m=+0.179435637 container attach 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:57:55 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 12 16:57:55 np0005481680 gracious_khayyam[94798]: 
Oct 12 16:57:55 np0005481680 gracious_khayyam[94798]: [{"container_id": "3f5483675213", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.13%", "created": "2025-10-12T20:55:02.551248Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-12T20:57:38.219638Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-10-12T20:55:02.429714Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@crash.compute-0", "version": "19.2.3"}, {"container_id": "b5f6ba11bf6e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.50%", "created": "2025-10-12T20:55:38.452918Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-12T20:57:37.468785Z", "memory_usage": 7833911, "ports": [], "service_name": "crash", "started": "2025-10-12T20:55:38.357136Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@crash.compute-1", "version": "19.2.3"}, {"container_id": "4b9de78a744f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.25%", "created": "2025-10-12T20:56:34.627967Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-12T20:57:37.820653Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-10-12T20:56:34.534646Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@crash.compute-2", "version": "19.2.3"}, {"container_id": "6f8c72bc2e25", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "26.78%", "created": "2025-10-12T20:54:21.314000Z", "daemon_id": "compute-0.fmjeht", "daemon_name": "mgr.compute-0.fmjeht", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-12T20:57:38.219569Z", "memory_usage": 544001228, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-12T20:54:21.211856Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mgr.compute-0.fmjeht", "version": "19.2.3"}, {"container_id": "34834c110853", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "37.61%", "created": "2025-10-12T20:56:32.630603Z", "daemon_id": "compute-1.orllvh", "daemon_name": "mgr.compute-1.orllvh", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-12T20:57:37.469160Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2025-10-12T20:56:32.512708Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mgr.compute-1.orllvh", "version": "19.2.3"}, {"container_id": "863d0d476535", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "34.57%", "created": "2025-10-12T20:56:26.507787Z", "daemon_id": "compute-2.iamnla", "daemon_name": "mgr.compute-2.iamnla", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-12T20:57:37.820580Z", "memory_usage": 504260198, "ports": [8765], "service_name": "mgr", "started": "2025-10-12T20:56:26.368104Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mgr.compute-2.iamnla", "version": "19.2.3"}, {"container_id": "88c795ee6783", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.75%", "created": "2025-10-12T20:54:17.309885Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-12T20:57:38.219484Z", "memory_request": 2147483648, "memory_usage": 58154024, "ports": [], "service_name": "mon", "started": "2025-10-12T20:54:19.374572Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-0", "version": "19.2.3"}, {"container_id": "2df577a6089d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.10%", "created": "2025-10-12T20:56:21.615260Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-12T20:57:37.469015Z", "memory_request": 2147483648, "memory_usage": 44941967, "ports": [], "service_name": "mon", "started": "2025-10-12T20:56:21.467568Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-1", "version": "19.2.3"}, {"container_id": "6c11e918df93", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.79%", "created": "2025-10-12T20:56:19.366629Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-12T20:57:37.820479Z", "memory_request": 2147483648, "memory_usage": 46913290, "ports": [], "service_name": "mon", "started": "2025-10-12T20:56:19.228184Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5adb8c35-1b74-5730-a252-62321f654cd5@mon.compute-2", "version": "19.2.3"}, {"container_id": "71c05854769d", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vzqubv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vzqubv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:55 np0005481680 systemd[1]: libpod-3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f.scope: Deactivated successfully.
Oct 12 16:57:55 np0005481680 conmon[94798]: conmon 3eb9ca5e58f31cf33911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f.scope/container/memory.events
Oct 12 16:57:55 np0005481680 podman[94783]: 2025-10-12 20:57:55.348631504 +0000 UTC m=+0.622893698 container died 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vzqubv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-73f5ecc44c36499deb07996de15225ee8b4b76ad090b56c48789238ac8155a79-merged.mount: Deactivated successfully.
Oct 12 16:57:55 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.vzqubv on compute-1
Oct 12 16:57:55 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.vzqubv on compute-1
Oct 12 16:57:55 np0005481680 podman[94783]: 2025-10-12 20:57:55.392247265 +0000 UTC m=+0.666509469 container remove 3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f (image=quay.io/ceph/ceph:v19, name=gracious_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:57:55 np0005481680 systemd[1]: libpod-conmon-3eb9ca5e58f31cf33911cebeff692e659216763f9bd4fd1e065fb52c60d9673f.scope: Deactivated successfully.
Oct 12 16:57:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v17: 132 pgs: 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:55 np0005481680 rsyslogd[998]: message too long (12594) with configured size 8096, begin of message is: [{"container_id": "3f5483675213", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 12 16:57:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 40 pg[9.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct 12 16:57:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vzqubv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vzqubv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: Deploying daemon rgw.rgw.compute-1.vzqubv on compute-1
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/2930687201' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 12 16:57:56 np0005481680 python3[94862]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:56 np0005481680 podman[94863]: 2025-10-12 20:57:56.492736013 +0000 UTC m=+0.049359072 container create 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 16:57:56 np0005481680 systemd[1]: Started libpod-conmon-989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff.scope.
Oct 12 16:57:56 np0005481680 podman[94863]: 2025-10-12 20:57:56.469890008 +0000 UTC m=+0.026513147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592092d75c07dc3984b7209c30fcb36d9b131c6f663c7e8971f36fbcaedcdbff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592092d75c07dc3984b7209c30fcb36d9b131c6f663c7e8971f36fbcaedcdbff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:56 np0005481680 podman[94863]: 2025-10-12 20:57:56.582123628 +0000 UTC m=+0.138746727 container init 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:57:56 np0005481680 podman[94863]: 2025-10-12 20:57:56.588105344 +0000 UTC m=+0.144728433 container start 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 16:57:56 np0005481680 podman[94863]: 2025-10-12 20:57:56.591983178 +0000 UTC m=+0.148606267 container attach 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 12 16:57:56 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 12 16:57:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 41 pg[9.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676697334' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 12 16:57:57 np0005481680 agitated_zhukovsky[94878]: 
Oct 12 16:57:57 np0005481680 agitated_zhukovsky[94878]: {"fsid":"5adb8c35-1b74-5730-a252-62321f654cd5","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":86,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1760302614,"num_in_osds":3,"osd_in_since":1760302596,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":132}],"num_pgs":132,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":84201472,"bytes_avail":64327725056,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-10-12T20:57:36:642338+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":3,"modified":"2025-10-12T20:57:00.493331+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.orllvh":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.iamnla":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"91a5563f-6b04-4bfe-8c40-3180eb6c0d27":{"message":"Updating rgw.rgw deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Oct 12 16:57:57 np0005481680 systemd[1]: libpod-989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff.scope: Deactivated successfully.
Oct 12 16:57:57 np0005481680 podman[94863]: 2025-10-12 20:57:57.03837485 +0000 UTC m=+0.594997909 container died 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-592092d75c07dc3984b7209c30fcb36d9b131c6f663c7e8971f36fbcaedcdbff-merged.mount: Deactivated successfully.
Oct 12 16:57:57 np0005481680 podman[94863]: 2025-10-12 20:57:57.088273704 +0000 UTC m=+0.644896743 container remove 989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff (image=quay.io/ceph/ceph:v19, name=agitated_zhukovsky, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 16:57:57 np0005481680 systemd[1]: libpod-conmon-989511816b9fb85c0e2fa10e63c10a0d6e8fd256e19e80ff7a2e17f9c692dfff.scope: Deactivated successfully.
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzclae", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzclae", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzclae", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:57 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.gzclae on compute-0
Oct 12 16:57:57 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.gzclae on compute-0
Oct 12 16:57:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v20: 133 pgs: 1 creating+peering, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzclae", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gzclae", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: Deploying daemon rgw.rgw.compute-0.gzclae on compute-0
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 12 16:57:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:57 np0005481680 podman[95014]: 2025-10-12 20:57:57.929919003 +0000 UTC m=+0.051155905 container create 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:57 np0005481680 systemd[1]: Started libpod-conmon-446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281.scope.
Oct 12 16:57:57 np0005481680 podman[95014]: 2025-10-12 20:57:57.901770419 +0000 UTC m=+0.023007411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:58 np0005481680 podman[95014]: 2025-10-12 20:57:58.044594964 +0000 UTC m=+0.165831956 container init 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:57:58 np0005481680 podman[95014]: 2025-10-12 20:57:58.051978284 +0000 UTC m=+0.173215196 container start 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:57:58 np0005481680 podman[95014]: 2025-10-12 20:57:58.055357736 +0000 UTC m=+0.176594708 container attach 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:57:58 np0005481680 loving_bardeen[95043]: 167 167
Oct 12 16:57:58 np0005481680 systemd[1]: libpod-446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281.scope: Deactivated successfully.
Oct 12 16:57:58 np0005481680 podman[95014]: 2025-10-12 20:57:58.05753858 +0000 UTC m=+0.178775492 container died 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:57:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a6110ccd12000cb0a35500ae18c2307a8a38f7a874b38876063978f4b0c3c77b-merged.mount: Deactivated successfully.
Oct 12 16:57:58 np0005481680 podman[95014]: 2025-10-12 20:57:58.106602653 +0000 UTC m=+0.227839555 container remove 446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 16:57:58 np0005481680 systemd[1]: libpod-conmon-446357062e775f39a2e348d645e73bfae38d57b060e8153fd936c50252bb5281.scope: Deactivated successfully.
Oct 12 16:57:58 np0005481680 systemd[1]: Reloading.
Oct 12 16:57:58 np0005481680 python3[95058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:57:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:57:58 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.253074687 +0000 UTC m=+0.048392468 container create c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.236886774 +0000 UTC m=+0.032204575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:57:58 np0005481680 systemd[1]: Started libpod-conmon-c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064.scope.
Oct 12 16:57:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:57:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265d2bc9794877f0fdf07b770358dbaada484640464fdeebe3915c69be8ce086/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265d2bc9794877f0fdf07b770358dbaada484640464fdeebe3915c69be8ce086/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:58 np0005481680 systemd[1]: Reloading.
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.496522741 +0000 UTC m=+0.291840602 container init c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.503100701 +0000 UTC m=+0.298418522 container start c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.508337109 +0000 UTC m=+0.303654920 container attach c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:57:58 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:57:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:57:58 np0005481680 systemd[1]: Starting Ceph rgw.rgw.compute-0.gzclae for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514296960' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 12 16:57:58 np0005481680 intelligent_jones[95124]: 
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/1297682603' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/3769754786' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:58 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 12 16:57:58 np0005481680 intelligent_jones[95124]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_a
llow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.fmjeht/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.orllvh/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.iamnla/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502946918","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.gzclae","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.vzqubv","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.lonmvq","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 12 16:57:58 np0005481680 systemd[1]: libpod-c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064.scope: Deactivated successfully.
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.86673298 +0000 UTC m=+0.662050791 container died c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:57:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-265d2bc9794877f0fdf07b770358dbaada484640464fdeebe3915c69be8ce086-merged.mount: Deactivated successfully.
Oct 12 16:57:58 np0005481680 podman[95075]: 2025-10-12 20:57:58.909547541 +0000 UTC m=+0.704865322 container remove c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064 (image=quay.io/ceph/ceph:v19, name=intelligent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:57:58 np0005481680 systemd[1]: libpod-conmon-c4427aaa842c40d9f3ca4a01f3668e3dce33140adc3cb099ded154bba3333064.scope: Deactivated successfully.
Oct 12 16:57:58 np0005481680 podman[95253]: 2025-10-12 20:57:58.994835146 +0000 UTC m=+0.046635685 container create 1d6909677a6dce291dd1f2c1b40f2f5717354d3049f26934c225d52ee89751f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-rgw-rgw-compute-0-gzclae, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 16:57:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a36b979ccca015838e5be10c06c507581f4b9c14948b91e3ec8dc264cbc745e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a36b979ccca015838e5be10c06c507581f4b9c14948b91e3ec8dc264cbc745e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a36b979ccca015838e5be10c06c507581f4b9c14948b91e3ec8dc264cbc745e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a36b979ccca015838e5be10c06c507581f4b9c14948b91e3ec8dc264cbc745e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.gzclae supports timestamps until 2038 (0x7fffffff)
Oct 12 16:57:59 np0005481680 podman[95253]: 2025-10-12 20:57:58.974098181 +0000 UTC m=+0.025898710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:57:59 np0005481680 podman[95253]: 2025-10-12 20:57:59.06977371 +0000 UTC m=+0.121574299 container init 1d6909677a6dce291dd1f2c1b40f2f5717354d3049f26934c225d52ee89751f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-rgw-rgw-compute-0-gzclae, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:57:59 np0005481680 podman[95253]: 2025-10-12 20:57:59.074700429 +0000 UTC m=+0.126500958 container start 1d6909677a6dce291dd1f2c1b40f2f5717354d3049f26934c225d52ee89751f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-rgw-rgw-compute-0-gzclae, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:57:59 np0005481680 bash[95253]: 1d6909677a6dce291dd1f2c1b40f2f5717354d3049f26934c225d52ee89751f5
Oct 12 16:57:59 np0005481680 systemd[1]: Started Ceph rgw.rgw.compute-0.gzclae for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:57:59 np0005481680 radosgw[95273]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:57:59 np0005481680 radosgw[95273]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 12 16:57:59 np0005481680 radosgw[95273]: framework: beast
Oct 12 16:57:59 np0005481680 radosgw[95273]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 12 16:57:59 np0005481680 radosgw[95273]: init_numa not setting numa affinity
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 91a5563f-6b04-4bfe-8c40-3180eb6c0d27 (Updating rgw.rgw deployment (+3 -> 3))
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 91a5563f-6b04-4bfe-8c40-3180eb6c0d27 (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev d965f844-f6fe-4cc0-9608-9001e488cba0 (Updating mds.cephfs deployment (+3 -> 3))
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vonnzo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vonnzo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vonnzo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.vonnzo on compute-2
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.vonnzo on compute-2
Oct 12 16:57:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v23: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vonnzo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vonnzo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: Deploying daemon mds.cephfs.compute-2.vonnzo on compute-2
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 12 16:57:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 python3[95887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.130194023 +0000 UTC m=+0.048734537 container create 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:58:00 np0005481680 systemd[1]: Started libpod-conmon-98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626.scope.
Oct 12 16:58:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.106038115 +0000 UTC m=+0.024578729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:58:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3ac49ef400cea7671619abaebf932142f027e02e7477c6807a1a8fc9b5c17/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3ac49ef400cea7671619abaebf932142f027e02e7477c6807a1a8fc9b5c17/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.215467968 +0000 UTC m=+0.134008512 container init 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.221995386 +0000 UTC m=+0.140535920 container start 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.226557178 +0000 UTC m=+0.145097722 container attach 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 16:58:00 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 44 pg[11.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268052317' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 12 16:58:00 np0005481680 hopeful_jang[95903]: mimic
Oct 12 16:58:00 np0005481680 systemd[1]: libpod-98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626.scope: Deactivated successfully.
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.592639095 +0000 UTC m=+0.511179649 container died 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:58:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-87f3ac49ef400cea7671619abaebf932142f027e02e7477c6807a1a8fc9b5c17-merged.mount: Deactivated successfully.
Oct 12 16:58:00 np0005481680 podman[95888]: 2025-10-12 20:58:00.639527297 +0000 UTC m=+0.558067831 container remove 98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626 (image=quay.io/ceph/ceph:v19, name=hopeful_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:58:00 np0005481680 systemd[1]: libpod-conmon-98c22d7238ec9722a40d903839631fee73d9fb5f509ad499c4d73848d73d1626.scope: Deactivated successfully.
Oct 12 16:58:00 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 12 completed events
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:00 np0005481680 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 12 16:58:00 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 45 pg[11.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/1297682603' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/3769754786' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:00 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nlzxsf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nlzxsf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nlzxsf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.nlzxsf on compute-0
Oct 12 16:58:01 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.nlzxsf on compute-0
Oct 12 16:58:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v26: 135 pgs: 2 unknown, 1 creating+peering, 132 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:58:01 np0005481680 python3[96022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:58:01 np0005481680 podman[96031]: 2025-10-12 20:58:01.803023178 +0000 UTC m=+0.059182062 container create 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 16:58:01 np0005481680 systemd[1]: Started libpod-conmon-88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3.scope.
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 12 16:58:01 np0005481680 podman[96031]: 2025-10-12 20:58:01.76902332 +0000 UTC m=+0.025182254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e3 new map
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-10-12T20:58:01:875312+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:57:36.642285+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.vonnzo{-1:24223} state up:standby seq 1 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] up:boot
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] as mds.0
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vonnzo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 12 16:58:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vonnzo"} v 0)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vonnzo"}]: dispatch
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e3 all = 0
Oct 12 16:58:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8d931f9806099057e93af2d670cc5dab7f2681516babd6bcb55e1fadfad90c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8d931f9806099057e93af2d670cc5dab7f2681516babd6bcb55e1fadfad90c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e4 new map
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-10-12T20:58:01:888706+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:01.888695+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.vonnzo{0:24223} state up:creating seq 1 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:creating}
Oct 12 16:58:01 np0005481680 podman[96031]: 2025-10-12 20:58:01.926228306 +0000 UTC m=+0.182387220 container init 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 16:58:01 np0005481680 podman[96031]: 2025-10-12 20:58:01.937975991 +0000 UTC m=+0.194134875 container start 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:58:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vonnzo is now active in filesystem cephfs as rank 0
Oct 12 16:58:01 np0005481680 podman[96031]: 2025-10-12 20:58:01.941766874 +0000 UTC m=+0.197925758 container attach 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.038103758 +0000 UTC m=+0.059161001 container create df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:58:02 np0005481680 systemd[1]: Started libpod-conmon-df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85.scope.
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.016211945 +0000 UTC m=+0.037269178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:58:02 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.131579312 +0000 UTC m=+0.152636625 container init df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.141639618 +0000 UTC m=+0.162696871 container start df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.145162063 +0000 UTC m=+0.166219306 container attach df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:58:02 np0005481680 laughing_feistel[96108]: 167 167
Oct 12 16:58:02 np0005481680 systemd[1]: libpod-df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85.scope: Deactivated successfully.
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.148275868 +0000 UTC m=+0.169333121 container died df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:58:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c821c807ccfcd74a69098970e4bfa4e1b1eaf6c1246867a60b7a0bc25c48bfaa-merged.mount: Deactivated successfully.
Oct 12 16:58:02 np0005481680 podman[96084]: 2025-10-12 20:58:02.20258839 +0000 UTC m=+0.223645643 container remove df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct 12 16:58:02 np0005481680 systemd[1]: libpod-conmon-df6c494195d6566a667a40e1c3062234bad1a3bbcf49d8d205b1d1d78cceba85.scope: Deactivated successfully.
Oct 12 16:58:02 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nlzxsf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nlzxsf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: Deploying daemon mds.cephfs.compute-0.nlzxsf on compute-0
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/1297682603' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/3769754786' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: daemon mds.cephfs.compute-2.vonnzo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: Cluster is now healthy
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: daemon mds.cephfs.compute-2.vonnzo is now active in filesystem cephfs as rank 0
Oct 12 16:58:02 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:02 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579517128' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 12 16:58:02 np0005481680 wonderful_wescoff[96065]: 
Oct 12 16:58:02 np0005481680 wonderful_wescoff[96065]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":1},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":10}}
Oct 12 16:58:02 np0005481680 podman[96031]: 2025-10-12 20:58:02.427314539 +0000 UTC m=+0.683473393 container died 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:58:02 np0005481680 systemd[1]: libpod-88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3.scope: Deactivated successfully.
Oct 12 16:58:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4a8d931f9806099057e93af2d670cc5dab7f2681516babd6bcb55e1fadfad90c-merged.mount: Deactivated successfully.
Oct 12 16:58:02 np0005481680 podman[96031]: 2025-10-12 20:58:02.554623986 +0000 UTC m=+0.810782830 container remove 88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, name=wonderful_wescoff, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:58:02 np0005481680 systemd[1]: libpod-conmon-88cff7e50b56bebb119b5f10537d59c17878ea3ed6b70fe194e54c8f7012b8e3.scope: Deactivated successfully.
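
The four systemd/podman lines above are one complete teardown of the short-lived exec container wonderful_wescoff: the runtime reports it died, the libpod scope and overlay mount are released, and the container record is removed. A hedged sketch for lifting (action, container id, name) out of such podman journal events; the regex is a heuristic fitted to the exact format shown here:

    import re

    # Heuristic for podman's journal events as printed above:
    #   "container <action> <64-hex id> (image=..., name=<name>, ...)"
    EVENT = re.compile(
        r"container (?P<action>\w+) (?P<cid>[0-9a-f]{64})"
        r".*?, name=(?P<name>[^,)]+)"
    )

    line = ("2025-10-12 20:58:02.554623986 +0000 UTC m=+0.810782830 "
            "container remove 88cff7e50b56bebb119b5f10537d59c17878ea3ed6"
            "b70fe194e54c8f7012b8e3 (image=quay.io/ceph/ceph:v19, "
            "name=wonderful_wescoff, CEPH_REF=squid)")
    m = EVENT.search(line)
    assert m and m.group("action") == "remove"
    assert m.group("name") == "wonderful_wescoff"
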
Oct 12 16:58:02 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:02 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:02 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 12 16:58:02 np0005481680 systemd[1]: Starting Ceph mds.cephfs.compute-0.nlzxsf for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e5 new map
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-10-12T20:58:02:896214+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:02.896211+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] up:active
Oct 12 16:58:02 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active}
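
The print_map dump above is a multi-line FSMap report flattened by syslog control-character escaping: "#012" is octal for newline and "#011" for tab, which is why epoch, flags, pools and the per-daemon states all run together on one line. A minimal filter to restore the original layout, assuming every #NNN triplet in these messages is such an escape (true for the lines shown here):

    import re
    import sys

    # Undo syslog-style octal escapes: "#012" -> "\n", "#011" -> "\t".
    def unescape(line: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    for line in sys.stdin:
        sys.stdout.write(unescape(line))
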
Oct 12 16:58:03 np0005481680 podman[96270]: 2025-10-12 20:58:03.248555991 +0000 UTC m=+0.051390321 container create b2ffb4f3bfcbfce556d959f57ad55f0a4b8ba6948b45e9fee09e10c2ad5cec55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mds-cephfs-compute-0-nlzxsf, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:58:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae4906306d9fdd423d241fdecc56e9c2ebff2788abfac388f70a38712ccf914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae4906306d9fdd423d241fdecc56e9c2ebff2788abfac388f70a38712ccf914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae4906306d9fdd423d241fdecc56e9c2ebff2788abfac388f70a38712ccf914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae4906306d9fdd423d241fdecc56e9c2ebff2788abfac388f70a38712ccf914/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.nlzxsf supports timestamps until 2038 (0x7fffffff)
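
The kernel's "timestamps until 2038" notes above concern the XFS-backed overlay mounts: these filesystems evidently lack the XFS bigtime feature, so inode timestamps are 32-bit signed seconds since the epoch, and the 0x7fffffff quoted in the message pins the exact cutoff. A two-line check:

    from datetime import datetime, timezone

    # Largest 32-bit signed time_t, as quoted by the kernel above.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # -> 2038-01-19T03:14:07+00:00
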
Oct 12 16:58:03 np0005481680 podman[96270]: 2025-10-12 20:58:03.222564609 +0000 UTC m=+0.025398979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.102:0/3769754786' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.101:0/1297682603' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 12 16:58:03 np0005481680 podman[96270]: 2025-10-12 20:58:03.328110207 +0000 UTC m=+0.130944517 container init b2ffb4f3bfcbfce556d959f57ad55f0a4b8ba6948b45e9fee09e10c2ad5cec55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mds-cephfs-compute-0-nlzxsf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 16:58:03 np0005481680 podman[96270]: 2025-10-12 20:58:03.337336852 +0000 UTC m=+0.140171182 container start b2ffb4f3bfcbfce556d959f57ad55f0a4b8ba6948b45e9fee09e10c2ad5cec55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mds-cephfs-compute-0-nlzxsf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:58:03 np0005481680 bash[96270]: b2ffb4f3bfcbfce556d959f57ad55f0a4b8ba6948b45e9fee09e10c2ad5cec55
Oct 12 16:58:03 np0005481680 systemd[1]: Started Ceph mds.cephfs.compute-0.nlzxsf for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:03 np0005481680 ceph-mds[96289]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:58:03 np0005481680 ceph-mds[96289]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 12 16:58:03 np0005481680 ceph-mds[96289]: main not setting numa affinity
Oct 12 16:58:03 np0005481680 ceph-mds[96289]: pidfile_write: ignore empty --pid-file
Oct 12 16:58:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mds-cephfs-compute-0-nlzxsf[96285]: starting mds.cephfs.compute-0.nlzxsf at 
Oct 12 16:58:03 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Updating MDS map to version 5 from mon.0
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ophvii", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ophvii", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ophvii", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:03 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ophvii on compute-1
Oct 12 16:58:03 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ophvii on compute-1
Oct 12 16:58:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v29: 136 pgs: 1 unknown, 135 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 KiB/s wr, 13 op/s
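
The pgmap DBG line above packs cluster state into one string (the lone "unknown" PG is one the mgr has not yet received stats for). A small sketch to lift the headline numbers back out; the group names are my own, fitted to the visible format:

    import re

    # Fitted to: "pgmap vN: N pgs: ...; X data, Y used, A / T avail; ..."
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v29: 136 pgs: 1 unknown, 135 active+clean; 450 KiB data, "
            "81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 2.0 KiB/s wr, "
            "13 op/s")
    m = PGMAP.search(line)
    assert m and m.group("pgs") == "136" and m.group("avail") == "60 GiB"
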
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 12 16:58:03 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 12 16:58:04 np0005481680 radosgw[95273]: v1 topic migration: starting v1 topic migration..
Oct 12 16:58:04 np0005481680 radosgw[95273]: LDAP not started since no server URIs were provided in the configuration.
Oct 12 16:58:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-rgw-rgw-compute-0-gzclae[95269]: 2025-10-12T20:58:04.124+0000 7f51cc3d3980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 12 16:58:04 np0005481680 radosgw[95273]: v1 topic migration: finished v1 topic migration
Oct 12 16:58:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 12 16:58:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 12 16:58:04 np0005481680 radosgw[95273]: framework: beast
Oct 12 16:58:04 np0005481680 radosgw[95273]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 12 16:58:04 np0005481680 radosgw[95273]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 12 16:58:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 12 16:58:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 12 16:58:04 np0005481680 radosgw[95273]: starting handler: beast
Oct 12 16:58:04 np0005481680 radosgw[95273]: set uid:gid to 167:167 (ceph:ceph)
Oct 12 16:58:04 np0005481680 radosgw[95273]: mgrc service_daemon_register rgw.14571 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.gzclae,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=aa1567c2-16e1-4167-a21d-7fbac8b71d4e,zone_name=default,zonegroup_id=398c7fe1-e9cc-46c2-b25a-6ed4bee70879,zonegroup_name=default}
Oct 12 16:58:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
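
The service_daemon_register line above is a single "{k=v,k=v,...}" metadata blob whose values can themselves contain commas (the full ceph_version string, the kernel_description), so a plain split(",") would mangle it. Splitting only on commas followed by a "key=" token is a workable heuristic; a sketch:

    import re

    def parse_metadata(blob: str) -> dict:
        """Parse '{k=v,k=v,...}' where values may contain commas."""
        inner = blob.strip().lstrip("{").rstrip("}")
        # Split only on commas that begin a new "key=" field (heuristic).
        fields = re.split(r",(?=[\w#.-]+=)", inner)
        return dict(f.split("=", 1) for f in fields)

    meta = parse_metadata(
        "{arch=x86_64,ceph_release=squid,cpu=AMD EPYC-Rome Processor,"
        "hostname=compute-0,zone_name=default}"
    )
    assert meta["cpu"] == "AMD EPYC-Rome Processor"
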
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e6 new map
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-10-12T20:58:04:324688+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:02.896211+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:04 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Updating MDS map to version 6 from mon.0
Oct 12 16:58:04 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Monitors have assigned me to become a standby
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] up:boot
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 1 up:standby
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.nlzxsf"} v 0)
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.nlzxsf"}]: dispatch
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e6 all = 0
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e7 new map
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-10-12T20:58:04:359935+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:02.896211+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 1 up:standby
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ophvii", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ophvii", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: Deploying daemon mds.cephfs.compute-1.ophvii on compute-1
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='client.? 192.168.122.100:0/462267527' entity='client.rgw.rgw.compute-0.gzclae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-2.lonmvq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: from='client.? ' entity='client.rgw.rgw.compute-1.vzqubv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 12 16:58:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev d965f844-f6fe-4cc0-9608-9001e488cba0 (Updating mds.cephfs deployment (+3 -> 3))
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event d965f844-f6fe-4cc0-9608-9001e488cba0 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev d40dc7fc-032a-4a16-ad3a-21cdda6ef719 (Updating nfs.cephfs deployment (+3 -> 3))
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
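
Each handle_command / dispatch / finished triple above is one round trip of a mon_command, like the auth get-or-create just issued for the NFS daemon's cephx key. A sketch of the same call through the python-rados binding; the conffile path and client name are assumptions for illustration:

    import json

    import rados  # python3-rados, shipped with the Ceph client packages

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    cmd = {"prefix": "auth get-or-create",
           "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc",
           "caps": ["mon", "allow r",
                    "osd", "allow rw pool=.nfs namespace=cephfs"]}
    # mon_command takes the command as a JSON string plus an input buffer;
    # it returns (retcode, output bytes, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode(), outs)
    cluster.shutdown()
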
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e8 new map
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-10-12T20:58:05:492499+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:02.896211+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 2 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.ophvii{-1:24233} state up:standby seq 1 addr [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] up:boot
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 2 up:standby
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ophvii"} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ophvii"}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e8 all = 0
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v31: 136 pgs: 1 unknown, 135 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 1.7 KiB/s wr, 11 op/s
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc-rgw
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc-rgw
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.mxbywc's ganesha conf is defaulting to empty
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.mxbywc's ganesha conf is defaulting to empty
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.mxbywc on compute-1
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.mxbywc on compute-1
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 13 completed events
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.0.0.compute-1.mxbywc-rgw
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mxbywc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Bind address in nfs.cephfs.0.0.compute-1.mxbywc's ganesha conf is defaulting to empty
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: Deploying daemon nfs.cephfs.0.0.compute-1.mxbywc on compute-1
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e9 new map
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-10-12T20:58:06:750756+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:05.940233+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.ophvii{-1:24233} state up:standby seq 1 addr [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] up:active
Oct 12 16:58:06 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 2 up:standby
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.wptquy
Oct 12 16:58:07 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.wptquy
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 12 16:58:07 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 12 16:58:07 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v32: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 9.3 KiB/s wr, 418 op/s
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.1.0.compute-2.wptquy
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:07 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e10 new map
Oct 12 16:58:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2025-10-12T20:58:08:332920+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:05.940233+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.ophvii{-1:24233} state up:standby seq 1 addr [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:08 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Updating MDS map to version 10 from mon.0
Oct 12 16:58:08 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] up:standby
Oct 12 16:58:08 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 2 up:standby
Oct 12 16:58:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v33: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 6.2 KiB/s wr, 316 op/s
Oct 12 16:58:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 new map
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 print_map#012e11#012btime 2025-10-12T20:58:10:361027+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-12T20:57:36.642285+0000#012modified#0112025-10-12T20:58:05.940233+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24223}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24223 members: 24223#012[mds.cephfs.compute-2.vonnzo{0:24223} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2427484792,v1:192.168.122.102:6805/2427484792] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.nlzxsf{-1:14589} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3837205675,v1:192.168.122.100:6807/3837205675] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.ophvii{-1:24233} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] compat {c=[1],r=[1],i=[1fff]}]
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2666767602,v1:192.168.122.101:6805/2666767602] up:standby
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 2 up:standby
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.wptquy-rgw
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.wptquy-rgw
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.wptquy's ganesha conf is defaulting to empty
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.wptquy's ganesha conf is defaulting to empty
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.wptquy on compute-2
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.wptquy on compute-2
Oct 12 16:58:10 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 8108728a-2b20-41ca-9e9d-b1102f4112c5 (Global Recovery Event) in 10 seconds
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.1.0.compute-2.wptquy-rgw
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wptquy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: Bind address in nfs.cephfs.1.0.compute-2.wptquy's ganesha conf is defaulting to empty
Oct 12 16:58:11 np0005481680 ceph-mon[73608]: Deploying daemon nfs.cephfs.1.0.compute-2.wptquy on compute-2
Oct 12 16:58:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 5.5 KiB/s wr, 280 op/s
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.hypubd
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.hypubd
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.hypubd-rgw
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.hypubd-rgw
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.hypubd's ganesha conf is defaulting to empty
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.hypubd's ganesha conf is defaulting to empty
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:58:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.hypubd on compute-0
Oct 12 16:58:12 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.hypubd on compute-0
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.452602534 +0000 UTC m=+0.050079580 container create 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 16:58:13 np0005481680 systemd[1]: Started libpod-conmon-3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad.scope.
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.2.0.compute-0.hypubd
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 12 16:58:13 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.hypubd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.433156641 +0000 UTC m=+0.030633667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:58:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.551755227 +0000 UTC m=+0.149232283 container init 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.559403042 +0000 UTC m=+0.156880058 container start 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.563637135 +0000 UTC m=+0.161114191 container attach 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:58:13 np0005481680 exciting_payne[96557]: 167 167
Oct 12 16:58:13 np0005481680 systemd[1]: libpod-3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad.scope: Deactivated successfully.
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.568183477 +0000 UTC m=+0.165660523 container died 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:58:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-63d43bf97950d231041e6654e96f32d6b24bd9fd33a6fc6f3a148f31e4c556ab-merged.mount: Deactivated successfully.
Oct 12 16:58:13 np0005481680 podman[96541]: 2025-10-12 20:58:13.615269712 +0000 UTC m=+0.212746758 container remove 3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:58:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 5.5 KiB/s wr, 247 op/s
Oct 12 16:58:13 np0005481680 systemd[1]: libpod-conmon-3858e6ef977123227de1c9ac213b462db436189e71d677cd893cd9c5813a5cad.scope: Deactivated successfully.
Oct 12 16:58:13 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:13 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:13 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:13 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:14 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:14 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:14 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: Rados config object exists: conf-nfs.cephfs
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: Creating key for client.nfs.cephfs.2.0.compute-0.hypubd-rgw
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: Bind address in nfs.cephfs.2.0.compute-0.hypubd's ganesha conf is defaulting to empty
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: Deploying daemon nfs.cephfs.2.0.compute-0.hypubd on compute-0
Oct 12 16:58:14 np0005481680 podman[96699]: 2025-10-12 20:58:14.552264191 +0000 UTC m=+0.046683906 container create faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 16:58:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775069da103ffa31071d62e0068a918b0346ff2de7a078bc2d094072e6522f81/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775069da103ffa31071d62e0068a918b0346ff2de7a078bc2d094072e6522f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775069da103ffa31071d62e0068a918b0346ff2de7a078bc2d094072e6522f81/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775069da103ffa31071d62e0068a918b0346ff2de7a078bc2d094072e6522f81/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:14 np0005481680 podman[96699]: 2025-10-12 20:58:14.532430609 +0000 UTC m=+0.026850344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:58:14 np0005481680 podman[96699]: 2025-10-12 20:58:14.632544305 +0000 UTC m=+0.126964110 container init faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:58:14 np0005481680 podman[96699]: 2025-10-12 20:58:14.638468149 +0000 UTC m=+0.132887904 container start faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 16:58:14 np0005481680 bash[96699]: faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 16:58:14 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:14 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev d40dc7fc-032a-4a16-ad3a-21cdda6ef719 (Updating nfs.cephfs deployment (+3 -> 3))
Oct 12 16:58:14 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event d40dc7fc-032a-4a16-ad3a-21cdda6ef719 (Updating nfs.cephfs deployment (+3 -> 3)) in 9 seconds
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:14 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 6c58862f-9609-4f5e-9d02-ca9425635b39 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 16:58:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:14 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.jcnfiu on compute-1
Oct 12 16:58:14 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.jcnfiu on compute-1
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 16:58:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 16:58:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 4.7 KiB/s wr, 211 op/s
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:15 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 15 completed events
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:16 np0005481680 ceph-mon[73608]: Deploying daemon haproxy.nfs.cephfs.compute-1.jcnfiu on compute-1
Oct 12 16:58:16 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 6.4 KiB/s wr, 213 op/s
Oct 12 16:58:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.4 KiB/s wr, 9 op/s
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.wruenf on compute-0
Oct 12 16:58:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.wruenf on compute-0
Oct 12 16:58:20 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:20 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:20 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:20 np0005481680 ceph-mon[73608]: Deploying daemon haproxy.nfs.cephfs.compute-0.wruenf on compute-0
Oct 12 16:58:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:20 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe06c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v39: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.4 KiB/s wr, 9 op/s
Oct 12 16:58:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:22 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.126344012 +0000 UTC m=+2.827550112 container create bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.048524519 +0000 UTC m=+2.749730719 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 12 16:58:23 np0005481680 systemd[1]: Started libpod-conmon-bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4.scope.
Oct 12 16:58:23 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.233970811 +0000 UTC m=+2.935176991 container init bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.246044646 +0000 UTC m=+2.947250786 container start bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.250375881 +0000 UTC m=+2.951582031 container attach bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 magical_gates[96978]: 0 0
Oct 12 16:58:23 np0005481680 systemd[1]: libpod-bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4.scope: Deactivated successfully.
Oct 12 16:58:23 np0005481680 conmon[96978]: conmon bf8559ac958967523418 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4.scope/container/memory.events
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.255442123 +0000 UTC m=+2.956648273 container died bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b7222801aa4850e3517cdb8082f0f00675265e35f2f21da3d1f628951676a6cd-merged.mount: Deactivated successfully.
Oct 12 16:58:23 np0005481680 podman[96859]: 2025-10-12 20:58:23.315013983 +0000 UTC m=+3.016220153 container remove bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4 (image=quay.io/ceph/haproxy:2.3, name=magical_gates)
Oct 12 16:58:23 np0005481680 systemd[1]: libpod-conmon-bf8559ac958967523418758749abbd164b6cedde26fc2e1e846da50b2a8a4df4.scope: Deactivated successfully.
Oct 12 16:58:23 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:23 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:23 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.4 KiB/s wr, 9 op/s
Oct 12 16:58:23 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:23 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:23 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:23 np0005481680 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.wruenf for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:24 np0005481680 podman[97123]: 2025-10-12 20:58:24.205665455 +0000 UTC m=+0.041793358 container create 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:58:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e397ab15c1c603c2622e53e3d08086103b0b20c6cc5c062b381f464c0ba9acc6/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:24 np0005481680 podman[97123]: 2025-10-12 20:58:24.276413546 +0000 UTC m=+0.112541419 container init 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:58:24 np0005481680 podman[97123]: 2025-10-12 20:58:24.18611728 +0000 UTC m=+0.022245223 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 12 16:58:24 np0005481680 podman[97123]: 2025-10-12 20:58:24.281715866 +0000 UTC m=+0.117843729 container start 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:58:24 np0005481680 bash[97123]: 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a
Oct 12 16:58:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [NOTICE] 284/205824 (2) : New worker #1 (4) forked
Oct 12 16:58:24 np0005481680 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.wruenf for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.afkvqr on compute-2
Oct 12 16:58:24 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.afkvqr on compute-2
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:24 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:25 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v41: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 12 16:58:25 np0005481680 ceph-mon[73608]: Deploying daemon haproxy.nfs.cephfs.compute-2.afkvqr on compute-2
Oct 12 16:58:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:26 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:27 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v42: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct 12 16:58:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.rcmuhh on compute-2
Oct 12 16:58:28 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.rcmuhh on compute-2
Oct 12 16:58:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:28 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0480016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: Deploying daemon keepalived.nfs.cephfs.compute-2.rcmuhh on compute-2
Oct 12 16:58:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:29 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 16:58:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:29 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0640023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:30 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:31 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0480016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v44: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 16:58:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:31 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:32 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v45: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 16:58:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:33 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:33 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0480016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.xazanh on compute-1
Oct 12 16:58:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.xazanh on compute-1
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.662121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714662231, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6752, "num_deletes": 250, "total_data_size": 12371253, "memory_usage": 13249376, "flush_reason": "Manual Compaction"}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714719297, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11000668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 6889, "table_properties": {"data_size": 10976801, "index_size": 15049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 73982, "raw_average_key_size": 23, "raw_value_size": 10917874, "raw_average_value_size": 3540, "num_data_blocks": 668, "num_entries": 3084, "num_filter_entries": 3084, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302459, "oldest_key_time": 1760302459, "file_creation_time": 1760302714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 57232 microseconds, and 17367 cpu microseconds.
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.719357) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11000668 bytes OK
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.719379) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.720722) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.720738) EVENT_LOG_v1 {"time_micros": 1760302714720733, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.720757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12341139, prev total WAL file size 12341139, number of live WAL files 2.
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.724042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(10MB) 13(57KB) 8(1944B)]
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714724256, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11061090, "oldest_snapshot_seqno": -1}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2907 keys, 11043240 bytes, temperature: kUnknown
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714804532, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11043240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11019679, "index_size": 15196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7301, "raw_key_size": 72978, "raw_average_key_size": 25, "raw_value_size": 10962203, "raw_average_value_size": 3770, "num_data_blocks": 674, "num_entries": 2907, "num_filter_entries": 2907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760302714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.804843) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11043240 bytes
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.807022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.6 rd, 137.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(10.5, 0.0 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3193, records dropped: 286 output_compression: NoCompression
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.807051) EVENT_LOG_v1 {"time_micros": 1760302714807037, "job": 4, "event": "compaction_finished", "compaction_time_micros": 80377, "compaction_time_cpu_micros": 44552, "output_level": 6, "num_output_files": 1, "total_output_size": 11043240, "num_input_records": 3193, "num_output_records": 2907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714809549, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714809692, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302714809811, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 12 16:58:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:34.723838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:34 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v46: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 16:58:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:35 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:58:35
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', '.nfs', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.log']
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 16:58:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:35 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: Deploying daemon keepalived.nfs.cephfs.compute-1.xazanh on compute-1
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 12 16:58:35 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 2b5eb860-3dc4-40e8-9a84-de15817e1654 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 12 16:58:36 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 1d2fab7d-9100-4cd2-8733-3c679ac9ad93 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:36 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v49: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 12 16:58:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:37 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:37 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 12 16:58:37 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 7bfca822-3276-44a3-8fd3-39ed6e26494a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.zelovc on compute-0
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.zelovc on compute-0
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 12 16:58:38 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev dc260220-47c1-4992-859a-7c656627630e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 51 pg[6.0( v 47'39 (0'0,47'39] local-lis/les=17/18 n=22 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=51 pruub=8.122042656s) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 46'38 mlcod 46'38 active pruub 174.191192627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.0( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=51 pruub=8.122042656s) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 46'38 mlcod 0'0 unknown pruub 174.191192627s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.8( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.9( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.1( v 47'39 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.6( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.a( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.e( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.2( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.5( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.4( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.c( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.7( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.b( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.d( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.3( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 52 pg[6.f( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=17/18 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:38 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v52: 182 pgs: 46 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:39 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:39 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: Deploying daemon keepalived.nfs.cephfs.compute-0.zelovc on compute-0
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 12 16:58:39 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 2278a7bc-93d5-474b-899a-703fc15c31b6 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[8.0( v 48'45 (0'0,48'45] local-lis/les=37/38 n=5 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53 pruub=12.846869469s) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 48'44 mlcod 48'44 active pruub 179.925918579s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[9.0( v 41'6 (0'0,41'6] local-lis/les=40/41 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=53 pruub=12.884513855s) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 41'5 mlcod 41'5 active pruub 179.964172363s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[9.0( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=53 pruub=12.884513855s) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 41'5 mlcod 0'0 unknown pruub 179.964172363s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[8.0( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53 pruub=12.846869469s) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 48'44 mlcod 0'0 unknown pruub 179.925918579s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.c( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.b( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.a( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.9( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.e( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.f( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.2( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.5( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.0( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 46'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.3( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.4( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.6( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.7( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.1( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.8( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:39 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 53 pg[6.d( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=17/17 les/c/f=18/18/0 sis=51) [0] r=0 lpr=51 pi=[17,51)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 12 16:58:40 np0005481680 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 12 16:58:40 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev 1af3d18f-323b-4c33-a791-2ce42aeffae5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct 12 16:58:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.19( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1e( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.18( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.16( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1f( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.17( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.17( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.16( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.10( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.3( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.11( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.2( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.4( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.5( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.6( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.7( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.13( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.12( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.12( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.13( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1d( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1c( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1d( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1c( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1f( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1e( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.18( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1b( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.19( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1a( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1a( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1b( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.4( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.5( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.7( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.6( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1( v 41'6 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1( v 48'45 (0'0,48'45] local-lis/les=37/38 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.a( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.b( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.d( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.c( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.d( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.c( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.f( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.e( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.b( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.a( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.8( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.9( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.8( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.9( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.f( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.e( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.3( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.2( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.10( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.11( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.15( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.14( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.15( v 41'6 lc 0'0 (0'0,41'6] local-lis/les=40/41 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.14( v 48'45 lc 0'0 (0'0,48'45] local-lis/les=37/38 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.19( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1e( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.16( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.17( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.3( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.4( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.7( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.13( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.10( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.13( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.12( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1f( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1e( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1d( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.18( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1c( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1b( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1a( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1a( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.0( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 48'44 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.5( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.1( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.0( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 41'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.6( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.7( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.1( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.a( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.c( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.e( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.f( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.b( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.8( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.9( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:40 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.2( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.11( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.14( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.15( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[8.10( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [0] r=0 lpr=53 pi=[37,53)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:40 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 54 pg[9.e( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=40/40 les/c/f=41/41/0 sis=53) [0] r=0 lpr=53 pi=[40,53)/1 crt=41'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:41 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 12 16:58:41 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 12 16:58:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v55: 244 pgs: 108 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:58:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:41 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:41 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 12 16:58:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 55 pg[11.0( v 45'48 (0'0,45'48] local-lis/les=44/45 n=8 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.790771484s) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 45'47 mlcod 45'47 active pruub 184.022109985s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 55 pg[11.0( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.790771484s) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 45'47 mlcod 0'0 unknown pruub 184.022109985s@ mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev accd6323-6639-4827-ad44-c5bd3ff100e2 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 2b5eb860-3dc4-40e8-9a84-de15817e1654 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 2b5eb860-3dc4-40e8-9a84-de15817e1654 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 1d2fab7d-9100-4cd2-8733-3c679ac9ad93 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 1d2fab7d-9100-4cd2-8733-3c679ac9ad93 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 7bfca822-3276-44a3-8fd3-39ed6e26494a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 7bfca822-3276-44a3-8fd3-39ed6e26494a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev dc260220-47c1-4992-859a-7c656627630e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event dc260220-47c1-4992-859a-7c656627630e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 2278a7bc-93d5-474b-899a-703fc15c31b6 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 2278a7bc-93d5-474b-899a-703fc15c31b6 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 1af3d18f-323b-4c33-a791-2ce42aeffae5 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 1af3d18f-323b-4c33-a791-2ce42aeffae5 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev accd6323-6639-4827-ad44-c5bd3ff100e2 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct 12 16:58:42 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event accd6323-6639-4827-ad44-c5bd3ff100e2 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:42 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.267249144 +0000 UTC m=+2.723258186 container create ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, version=2.2.4, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.246261743 +0000 UTC m=+2.702270775 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 12 16:58:42 np0005481680 systemd[1]: Started libpod-conmon-ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98.scope.
Oct 12 16:58:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.353292007 +0000 UTC m=+2.809301069 container init ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc.)
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.364679784 +0000 UTC m=+2.820688816 container start ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, name=keepalived, io.buildah.version=1.28.2, com.redhat.component=keepalived-container)
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.367902043 +0000 UTC m=+2.823911075 container attach ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 12 16:58:42 np0005481680 relaxed_jackson[97343]: 0 0
Oct 12 16:58:42 np0005481680 systemd[1]: libpod-ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98.scope: Deactivated successfully.
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.369172714 +0000 UTC m=+2.825181736 container died ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived)
Oct 12 16:58:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-63eb6b6148d2e123b598fda79b7905a0616e2a4dc2b6bc4be5cdad615fe15b59-merged.mount: Deactivated successfully.
Oct 12 16:58:42 np0005481680 podman[97247]: 2025-10-12 20:58:42.4154293 +0000 UTC m=+2.871438352 container remove ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98 (image=quay.io/ceph/keepalived:2.2.4, name=relaxed_jackson, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Oct 12 16:58:42 np0005481680 systemd[1]: libpod-conmon-ce9dde1488fe6df91ad3eb10badd47648a29f85c62aa9e3768e61d07a3243e98.scope: Deactivated successfully.
Oct 12 16:58:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 12 16:58:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 12 16:58:42 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:42 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:42 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:42 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:42 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:42 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:42 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.17( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.16( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.13( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.9( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.8( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.2( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.3( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.7( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.4( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.18( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.19( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1a( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1d( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1e( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1f( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.11( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.10( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.5( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.6( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1( v 45'48 (0'0,45'48] local-lis/les=44/45 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.12( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.15( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.14( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1b( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1c( v 45'48 lc 0'0 (0'0,45'48] local-lis/les=44/45 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.16( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.0( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 45'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.17( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.13( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.e( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.c( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.9( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.d( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.8( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.b( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.f( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.3( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.7( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.19( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.4( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.18( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1d( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1e( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.11( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1f( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.10( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.2( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.6( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.15( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.14( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.12( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1b( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.1c( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 56 pg[11.5( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:43 np0005481680 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.zelovc for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:43 np0005481680 podman[97493]: 2025-10-12 20:58:43.408468123 +0000 UTC m=+0.040933727 container create 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, vcs-type=git, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, distribution-scope=public)
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 12 16:58:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 12 16:58:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d389c7b33d08f2b4f0d3360352a9e4c81a0b84c3468af4def44ce128ba32ed22/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:43 np0005481680 podman[97493]: 2025-10-12 20:58:43.391761916 +0000 UTC m=+0.024227520 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 12 16:58:43 np0005481680 podman[97493]: 2025-10-12 20:58:43.49135868 +0000 UTC m=+0.123824274 container init 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, vendor=Red Hat, Inc., distribution-scope=public, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 12 16:58:43 np0005481680 podman[97493]: 2025-10-12 20:58:43.498931223 +0000 UTC m=+0.131396817 container start 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Oct 12 16:58:43 np0005481680 bash[97493]: 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b
Oct 12 16:58:43 np0005481680 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.zelovc for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Starting VRRP child process, pid=4
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: Startup complete
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: (VI_0) Entering BACKUP STATE (init)
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:43 2025: VRRP_Script(check_backend) succeeded
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev 6c58862f-9609-4f5e-9d02-ca9425635b39 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 6c58862f-9609-4f5e-9d02-ca9425635b39 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 29 seconds
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v58: 306 pgs: 62 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct 12 16:58:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev cad557e5-07b9-440f-9ad2-b536f83422b5 (Updating alertmanager deployment (+1 -> 1))
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:43 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct 12 16:58:43 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct 12 16:58:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:43 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 12 16:58:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 12 16:58:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 12 16:58:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:44 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:45 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 12 16:58:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 12 16:58:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 12 16:58:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 93 unknown, 244 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:58:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:45 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0400016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:45 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:45 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 23 completed events
Oct 12 16:58:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.075302735 +0000 UTC m=+1.763721158 volume create 8c210d503421811465898738bcd0d493d3a12e5aebf462cb67747f26314c0719
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.082220642 +0000 UTC m=+1.770639055 container create efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 systemd[1]: Started libpod-conmon-efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76.scope.
Oct 12 16:58:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 12 16:58:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 12 16:58:46 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 12 16:58:46 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.063279912 +0000 UTC m=+1.751698355 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:58:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b68fc7941df1a27d68c23df5582e9b3a60ae3ece73a3860945d26b502cc600db/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.168706767 +0000 UTC m=+1.857125270 container init efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.176816064 +0000 UTC m=+1.865234497 container start efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.180775981 +0000 UTC m=+1.869194454 container attach efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 confident_cori[97747]: 65534 65534
Oct 12 16:58:46 np0005481680 systemd[1]: libpod-efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76.scope: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.181811206 +0000 UTC m=+1.870229669 container died efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b68fc7941df1a27d68c23df5582e9b3a60ae3ece73a3860945d26b502cc600db-merged.mount: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.23085966 +0000 UTC m=+1.919278123 container remove efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_cori, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 podman[97609]: 2025-10-12 20:58:46.235908162 +0000 UTC m=+1.924326635 volume remove 8c210d503421811465898738bcd0d493d3a12e5aebf462cb67747f26314c0719
Oct 12 16:58:46 np0005481680 systemd[1]: libpod-conmon-efcdce8ae568073c70e56cef5d9e883f0f55d5ea820bd93b7db6bb3400ee4e76.scope: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.333276892 +0000 UTC m=+0.061264892 volume create fec7201a82d11867262eb537ea306d5e1ff856a521d19fdabe97e288f36eea4b
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.344143606 +0000 UTC m=+0.072131576 container create fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 python3[97787]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:58:46 np0005481680 systemd[1]: Started libpod-conmon-fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546.scope.
Oct 12 16:58:46 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 12 16:58:46 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.315635362 +0000 UTC m=+0.043623372 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:58:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2b0443179989d5bffca22fe19e122a8a38b8129cfd2c7aaf0fc04d7c16d13b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:46 np0005481680 podman[97805]: 2025-10-12 20:58:46.43352397 +0000 UTC m=+0.055235444 container create 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.446720002 +0000 UTC m=+0.174708032 container init fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.451546329 +0000 UTC m=+0.179534299 container start fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 modest_allen[97811]: 65534 65534
Oct 12 16:58:46 np0005481680 systemd[1]: libpod-fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546.scope: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.461050651 +0000 UTC m=+0.189038701 container attach fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.46144887 +0000 UTC m=+0.189436870 container died fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 systemd[1]: Started libpod-conmon-5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168.scope.
Oct 12 16:58:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ac2b0443179989d5bffca22fe19e122a8a38b8129cfd2c7aaf0fc04d7c16d13b-merged.mount: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97805]: 2025-10-12 20:58:46.410309316 +0000 UTC m=+0.032020790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.510150185 +0000 UTC m=+0.238138155 container remove fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_allen, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:46 np0005481680 podman[97789]: 2025-10-12 20:58:46.513845385 +0000 UTC m=+0.241833385 volume remove fec7201a82d11867262eb537ea306d5e1ff856a521d19fdabe97e288f36eea4b
Oct 12 16:58:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6b6fe711a4a33742d46fe11f6dbf42566da5323cb3a71c23f7354ed8774fd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6b6fe711a4a33742d46fe11f6dbf42566da5323cb3a71c23f7354ed8774fd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:46 np0005481680 systemd[1]: libpod-conmon-fb9e1bdd0873479e85af3d30b4ad137adc7f924cfe2652bab9e93f5f9c092546.scope: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97805]: 2025-10-12 20:58:46.538372122 +0000 UTC m=+0.160083616 container init 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:58:46 np0005481680 podman[97805]: 2025-10-12 20:58:46.547867623 +0000 UTC m=+0.169579077 container start 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:58:46 np0005481680 podman[97805]: 2025-10-12 20:58:46.551986203 +0000 UTC m=+0.173697727 container attach 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:58:46 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:46 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:46 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:46 np0005481680 recursing_heyrovsky[97832]: could not fetch user info: no user info saved
Oct 12 16:58:46 np0005481680 systemd[1]: libpod-5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168.scope: Deactivated successfully.
Oct 12 16:58:46 np0005481680 podman[97959]: 2025-10-12 20:58:46.908099748 +0000 UTC m=+0.041170972 container died 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:58:46 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:46 np0005481680 podman[97959]: 2025-10-12 20:58:46.952225182 +0000 UTC m=+0.085296346 container remove 5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168 (image=quay.io/ceph/ceph:v19, name=recursing_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:58:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:46 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe058003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:58:47 2025: (VI_0) Entering MASTER STATE
Oct 12 16:58:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7d6b6fe711a4a33742d46fe11f6dbf42566da5323cb3a71c23f7354ed8774fd7-merged.mount: Deactivated successfully.
Oct 12 16:58:47 np0005481680 systemd[1]: libpod-conmon-5d6882038448a61b4847a4658b8fd640f63d78248686a6d3170199265a477168.scope: Deactivated successfully.
Oct 12 16:58:47 np0005481680 systemd[1]: Starting Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:47 np0005481680 python3[98041]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5adb8c35-1b74-5730-a252-62321f654cd5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:58:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 12 16:58:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 12 16:58:47 np0005481680 podman[98071]: 2025-10-12 20:58:47.46872131 +0000 UTC m=+0.070379314 container create 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 16:58:47 np0005481680 systemd[1]: Started libpod-conmon-4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2.scope.
Oct 12 16:58:47 np0005481680 podman[98071]: 2025-10-12 20:58:47.440928724 +0000 UTC m=+0.042586708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct 12 16:58:47 np0005481680 podman[98100]: 2025-10-12 20:58:47.550173652 +0000 UTC m=+0.074581846 volume create b6da8c0369a92b8680304f6a7b8de19c8437a7bb373333a1fe31da1f46a60168
Oct 12 16:58:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1cef9abf1b40e2548c0ed02f6021af0b3446eb34ce1deafb49938e5238c185/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1cef9abf1b40e2548c0ed02f6021af0b3446eb34ce1deafb49938e5238c185/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:47 np0005481680 podman[98100]: 2025-10-12 20:58:47.568458477 +0000 UTC m=+0.092866671 container create 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:47 np0005481680 podman[98071]: 2025-10-12 20:58:47.577701032 +0000 UTC m=+0.179359086 container init 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 12 16:58:47 np0005481680 podman[98071]: 2025-10-12 20:58:47.583704938 +0000 UTC m=+0.185362932 container start 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:58:47 np0005481680 podman[98071]: 2025-10-12 20:58:47.58789386 +0000 UTC m=+0.189551864 container attach 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:58:47 np0005481680 podman[98100]: 2025-10-12 20:58:47.526372063 +0000 UTC m=+0.050780307 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:58:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b729b0a337d3a404cddeed5de9b56a41a487a8a36062dabcce062852ec2ccaa5/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b729b0a337d3a404cddeed5de9b56a41a487a8a36062dabcce062852ec2ccaa5/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:47 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:47 np0005481680 podman[98100]: 2025-10-12 20:58:47.651387624 +0000 UTC m=+0.175795848 container init 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:47 np0005481680 podman[98100]: 2025-10-12 20:58:47.660914037 +0000 UTC m=+0.185322231 container start 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:58:47 np0005481680 bash[98100]: 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9
Oct 12 16:58:47 np0005481680 systemd[1]: Started Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:47 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.706Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.707Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.729Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.732Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.781Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.781Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.790Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 12 16:58:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:47.790Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev cad557e5-07b9-440f-9ad2-b536f83422b5 (Updating alertmanager deployment (+1 -> 1))
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event cad557e5-07b9-440f-9ad2-b536f83422b5 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev d09cafea-c102-4981-8ae5-f94194d6a1eb (Updating grafana deployment (+1 -> 1))
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct 12 16:58:47 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]: {
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "user_id": "openstack",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "display_name": "openstack",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "email": "",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "suspended": 0,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "max_buckets": 1000,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "subusers": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "keys": [
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        {
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:            "user": "openstack",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:            "access_key": "O85MBHU0JHWD27GLFPVH",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:            "secret_key": "BlPDcM1byPjqlR4rFTAAnMy5aMymWbkEAw3fKlbH",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:            "active": true,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:            "create_date": "2025-10-12T20:58:47.893231Z"
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        }
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    ],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "swift_keys": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "caps": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "op_mask": "read, write, delete",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "default_placement": "",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "default_storage_class": "",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "placement_tags": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "bucket_quota": {
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "enabled": false,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "check_on_raw": false,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_size": -1,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_size_kb": 0,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_objects": -1
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    },
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "user_quota": {
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "enabled": false,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "check_on_raw": false,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_size": -1,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_size_kb": 0,
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:        "max_objects": -1
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    },
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "temp_url_keys": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "type": "rgw",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "mfa_ids": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "account_id": "",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "path": "/",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "create_date": "2025-10-12T20:58:47.892563Z",
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "tags": [],
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]:    "group_ids": []
Oct 12 16:58:47 np0005481680 upbeat_kilby[98114]: }
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 12 16:58:48 np0005481680 systemd[1]: libpod-4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2.scope: Deactivated successfully.
Oct 12 16:58:48 np0005481680 podman[98071]: 2025-10-12 20:58:48.01161574 +0000 UTC m=+0.613273774 container died 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5a1cef9abf1b40e2548c0ed02f6021af0b3446eb34ce1deafb49938e5238c185-merged.mount: Deactivated successfully.
Oct 12 16:58:48 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct 12 16:58:48 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct 12 16:58:48 np0005481680 podman[98071]: 2025-10-12 20:58:48.06669554 +0000 UTC m=+0.668353534 container remove 4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2 (image=quay.io/ceph/ceph:v19, name=upbeat_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 16:58:48 np0005481680 systemd[1]: libpod-conmon-4213596475b0780f06ee408ab6a34830006004d2cd8671025b0cb939ca12e0a2.scope: Deactivated successfully.
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.17( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.933551788s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251098633s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.17( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.933510780s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251098633s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.790504456s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.108139038s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.790467262s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108139038s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.15( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.790404320s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.108139038s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.15( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.790349007s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108139038s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.16( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.929776192s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.247604370s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.16( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.929728508s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.247604370s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789774895s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.108108521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.13( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.932767868s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251113892s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.15( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789743423s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108108521s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.13( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.932738304s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251113892s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.10( v 57'48 (0'0,57'48] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789604187s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=55'46 lcod 55'47 mlcod 55'47 active pruub 184.108078003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.10( v 57'48 (0'0,57'48] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789556503s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=55'46 lcod 55'47 mlcod 0'0 unknown NOTIFY pruub 184.108078003s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.d( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.769474030s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.088134766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.d( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.769442558s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.088134766s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789357185s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.108093262s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.3( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789335251s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108093262s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.11( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789306641s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.108093262s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.1( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.769068718s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.088073730s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.1( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.769044876s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.088073730s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.e( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789158821s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.108215332s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.e( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.789085388s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108215332s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.11( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.788967133s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108093262s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.9( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787835121s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.107223511s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.9( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787810326s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.107223511s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787849426s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.107299805s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787830353s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.107299805s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.931547165s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251129150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.7( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.768193245s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087829590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.931476593s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251129150s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787643433s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.107299805s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.9( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787566185s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.107299805s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.7( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.768162727s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087829590s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.8( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.785410881s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105316162s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.8( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.785389900s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105316162s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.b( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.785284996s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105300903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.b( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.785261154s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105300903s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.e( v 58'57 (0'0,58'57] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930939674s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 58'56 active pruub 186.251159668s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.e( v 58'57 (0'0,58'57] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930885315s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 0'0 unknown NOTIFY pruub 186.251159668s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.f( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784860611s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105270386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784801483s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.105255127s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.f( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784823418s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105270386s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.d( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784775734s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105255127s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.3( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.767254829s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087707520s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.3( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.767027855s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087707520s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.f( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930549622s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251327515s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784403801s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.105209351s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.784384727s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105209351s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.f( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930524826s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251327515s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.8( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930262566s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251296997s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.8( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.930200577s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251296997s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783983231s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105209351s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.5( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.766394615s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087661743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783954620s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105209351s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.5( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.766364098s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087661743s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.787629128s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.108047485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.a( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783713341s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105224609s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.a( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783683777s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105224609s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.786533356s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.108047485s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.f( v 47'39 (0'0,47'39] local-lis/les=51/53 n=3 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.765808105s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087631226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.f( v 47'39 (0'0,47'39] local-lis/les=51/53 n=3 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.765777588s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087631226s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.3( v 58'57 (0'0,58'57] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.929363251s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 58'56 active pruub 186.251342773s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.3( v 58'57 (0'0,58'57] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.929311752s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 0'0 unknown NOTIFY pruub 186.251342773s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783081055s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.105285645s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.a( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.783025742s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105285645s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.9( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.765091896s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087509155s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.9( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.765062332s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087509155s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.4( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.928644180s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251388550s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.7( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.928570747s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251358032s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.4( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.928616524s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251388550s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.7( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.928550720s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251358032s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.6( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781701088s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105087280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.782031059s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.105133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.5( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781641960s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.105072021s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781394958s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.105010986s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781374931s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105010986s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.b( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781678200s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105133057s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.5( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781606674s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105072021s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.6( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781546593s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.105087280s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.19( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.927160263s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251373291s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.19( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.927131653s) [2] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251373291s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.781569481s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104995728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.780151367s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104995728s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.926143646s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251556396s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.18( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.779369354s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104827881s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1a( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.926113129s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251556396s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.18( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.779220581s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104827881s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1d( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.925918579s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251571655s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1d( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.925888062s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251571655s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1e( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.925476074s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251586914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778717995s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104858398s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.1d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778602600s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104751587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778689384s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104858398s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.1d( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778521538s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104751587s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1e( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.925445557s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251586914s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778127670s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104721069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778272629s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104751587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.13( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777901649s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104660034s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778100014s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104721069s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1c( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.778112411s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104751587s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.13( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777877808s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104660034s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777606964s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104629517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.7( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777581215s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104644775s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.6( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777571678s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104629517s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.7( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777553558s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104644775s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.5( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.927081108s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.254501343s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.5( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.927057266s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.254501343s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.12( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777153969s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104751587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.12( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.777120590s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104751587s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.b( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.759657860s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087448120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[6.b( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=59 pruub=15.759625435s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087448120s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776649475s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104598999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776698112s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104629517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.2( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776629448s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104598999s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.5( v 48'45 (0'0,48'45] local-lis/les=53/54 n=1 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776585579s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104629517s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.12( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.923337936s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251754761s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.12( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.923314095s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251754761s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775979042s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104614258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.923028946s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251708984s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.10( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776011467s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104721069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.11( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775904655s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104614258s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1( v 45'48 (0'0,45'48] local-lis/les=55/56 n=1 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.922989845s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251708984s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.10( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775993347s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104721069s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.14( v 58'57 (0'0,58'57] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.922703743s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 58'56 active pruub 186.251754761s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775585175s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104553223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.14( v 58'57 (0'0,58'57] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.922527313s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=58'57 lcod 58'56 mlcod 0'0 unknown NOTIFY pruub 186.251754761s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775099754s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104537964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.16( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774927139s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104400635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.16( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774904251s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104400635s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775078773s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104537964s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.17( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774660110s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104400635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.3( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.776588440s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 active pruub 184.104598999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.17( v 41'6 (0'0,41'6] local-lis/les=53/54 n=0 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774629593s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104400635s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774651527s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104568481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.16( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.775323868s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104553223s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[9.3( v 41'6 (0'0,41'6] local-lis/les=53/54 n=1 ec=53/40 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774770737s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=41'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104598999s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.18( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774600983s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104568481s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.774035454s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 active pruub 184.104446411s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[8.1f( v 48'45 (0'0,48'45] local-lis/les=53/54 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=8.773934364s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=48'45 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.104446411s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1b( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.922095299s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251785278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1b( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.921025276s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251785278s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1c( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.921052933s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active pruub 186.251785278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[11.1c( v 45'48 (0'0,45'48] local-lis/les=55/56 n=0 ec=55/44 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=10.920805931s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.251785278s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 12 16:58:48 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.10( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.1b( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.18( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.1e( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.f( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.6( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.2( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.3( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.8( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.6( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.e( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.9( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.8( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.b( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.10( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[12.19( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.13( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 59 pg[7.4( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 12 16:58:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 12 16:58:48 np0005481680 python3[98320]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:58:48 np0005481680 ceph-mgr[73901]: [dashboard INFO request] [192.168.122.100:45762] [GET] [200] [0.142s] [6.3K] [bca0bcb6-0c44-4013-b907-eeb0a470ed5c] /
Oct 12 16:58:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:48 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.a( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=59/60 n=1 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=58'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: Regenerating cephadm self-signed grafana TLS certificates
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: Deploying daemon grafana.compute-0 on compute-0
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=59/60 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:49 np0005481680 python3[98387]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 16:58:49 np0005481680 ceph-mgr[73901]: [dashboard INFO request] [192.168.122.100:45768] [GET] [200] [0.003s] [6.3K] [28631a00-200f-4d75-94ba-e353ed8a44c6] /
Oct 12 16:58:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 12 16:58:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:49 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0400016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:49 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:49.733Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000246231s
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.6( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.665896416s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087829590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.6( v 47'39 (0'0,47'39] local-lis/les=51/53 n=2 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.665856361s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087829590s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.2( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.665381432s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087829590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.2( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.665351868s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087829590s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.e( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.664813995s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087509155s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.e( v 47'39 (0'0,47'39] local-lis/les=51/53 n=1 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.664789200s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087509155s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.a( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.664392471s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 191.087509155s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[6.a( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=61 pruub=13.664214134s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.087509155s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 12 16:58:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 12 16:58:50 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 24 completed events
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:50 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 1e651602-a738-4c0a-a42b-f7eda667c4b5 (Global Recovery Event) in 10 seconds
Oct 12 16:58:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:50 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 12 16:58:51 np0005481680 systemd[1]: packagekit.service: Deactivated successfully.
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 62 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[55,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:58:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Oct 12 16:58:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 12 16:58:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:51 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:51 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0400016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 12 16:58:51 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 12 16:58:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 12 16:58:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 12 16:58:52 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 12 16:58:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:52 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 12 16:58:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 12 16:58:53 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 12 16:58:53 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 64 pg[10.2( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:53 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 64 pg[10.2( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovery_wait+degraded, 1 active+recovering, 311 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 1/226 objects degraded (0.442%); 1.2 KiB/s, 2 keys/s, 28 objects/s recovering
Oct 12 16:58:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:53 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:53 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/226 objects degraded (0.442%), 1 pg degraded (PG_DEGRADED)
Oct 12 16:58:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 12 16:58:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 12 16:58:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 12 16:58:54 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 65 pg[10.2( v 48'1034 (0'0,48'1034] local-lis/les=64/65 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:54 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064003620 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: Health check failed: Degraded data redundancy: 1/226 objects degraded (0.442%), 1 pg degraded (PG_DEGRADED)
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.074592571 +0000 UTC m=+6.279010606 container create e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 systemd[1]: Started libpod-conmon-e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04.scope.
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.04947949 +0000 UTC m=+6.253897545 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:58:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.169922171 +0000 UTC m=+6.374340266 container init e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.180519839 +0000 UTC m=+6.384937904 container start e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.183994843 +0000 UTC m=+6.388412908 container attach e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 wizardly_golick[98595]: 472 0
Oct 12 16:58:55 np0005481680 systemd[1]: libpod-e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04.scope: Deactivated successfully.
Oct 12 16:58:55 np0005481680 conmon[98595]: conmon e516c5731de23c79601e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04.scope/container/memory.events
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.188265777 +0000 UTC m=+6.392683832 container died e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6fc04614bc91389932ca55dc2c06e57e554bf58eb14fa24bfd6b8039f0d65850-merged.mount: Deactivated successfully.
Oct 12 16:58:55 np0005481680 podman[98350]: 2025-10-12 20:58:55.242466066 +0000 UTC m=+6.446884121 container remove e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04 (image=quay.io/ceph/grafana:10.4.0, name=wizardly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 systemd[1]: libpod-conmon-e516c5731de23c79601ee8c7222d227eda0ec2d8a483ac2b2084655260e81a04.scope: Deactivated successfully.
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.358045638 +0000 UTC m=+0.079615467 container create 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 12 16:58:55 np0005481680 systemd[1]: Started libpod-conmon-0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2.scope.
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.325551248 +0000 UTC m=+0.047121097 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 12 16:58:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.463348011 +0000 UTC m=+0.184917840 container init 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.472163105 +0000 UTC m=+0.193732904 container start 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 charming_chatelet[98628]: 472 0
Oct 12 16:58:55 np0005481680 systemd[1]: libpod-0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2.scope: Deactivated successfully.
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.477313631 +0000 UTC m=+0.198883450 container attach 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.47769086 +0000 UTC m=+0.199260679 container died 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9a37cb8e1ee28f906e21f9f4efc2552828f6b73cd8c5d4ae29b96654c5e81169-merged.mount: Deactivated successfully.
Oct 12 16:58:55 np0005481680 podman[98612]: 2025-10-12 20:58:55.512687422 +0000 UTC m=+0.234257221 container remove 0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2 (image=quay.io/ceph/grafana:10.4.0, name=charming_chatelet, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:55 np0005481680 systemd[1]: libpod-conmon-0eb914560b109d7b1076cfe4a97c27590e95eef665f4e782be15182725aaafc2.scope: Deactivated successfully.
Oct 12 16:58:55 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 8 remapped+peering, 16 peering, 1 active+recovery_wait+degraded, 1 active+recovering, 311 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 1/226 objects degraded (0.442%); 1.1 KiB/s, 2 keys/s, 26 objects/s recovering
Oct 12 16:58:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:55 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064003620 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:55 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:55 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:55 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:55 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 25 completed events
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:55 np0005481680 ceph-mgr[73901]: [progress WARNING root] Starting Global Recovery Event,26 pgs not in active + clean state
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 12 16:58:55 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:58:55 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 66 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:58:55 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:55 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:55 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.045343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736045381, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 920, "num_deletes": 251, "total_data_size": 1087865, "memory_usage": 1109856, "flush_reason": "Manual Compaction"}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736056196, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1053798, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6890, "largest_seqno": 7809, "table_properties": {"data_size": 1048727, "index_size": 2467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12711, "raw_average_key_size": 21, "raw_value_size": 1037821, "raw_average_value_size": 1729, "num_data_blocks": 110, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302714, "oldest_key_time": 1760302714, "file_creation_time": 1760302736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10906 microseconds, and 5880 cpu microseconds.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.056247) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1053798 bytes OK
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.056270) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.058498) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.058558) EVENT_LOG_v1 {"time_micros": 1760302736058546, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.058589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1082962, prev total WAL file size 1082962, number of live WAL files 2.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.059458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1029KB)], [20(10MB)]
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736059518, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12097038, "oldest_snapshot_seqno": -1}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2983 keys, 10892397 bytes, temperature: kUnknown
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736135299, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 10892397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10868205, "index_size": 15635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7493, "raw_key_size": 76596, "raw_average_key_size": 25, "raw_value_size": 10809044, "raw_average_value_size": 3623, "num_data_blocks": 686, "num_entries": 2983, "num_filter_entries": 2983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760302736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.135474) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10892397 bytes
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.136851) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.5 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(21.8) write-amplify(10.3) OK, records in: 3507, records dropped: 524 output_compression: NoCompression
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.136867) EVENT_LOG_v1 {"time_micros": 1760302736136860, "job": 6, "event": "compaction_finished", "compaction_time_micros": 75835, "compaction_time_cpu_micros": 19918, "output_level": 6, "num_output_files": 1, "total_output_size": 10892397, "num_input_records": 3507, "num_output_records": 2983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736137112, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302736138582, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.059369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.138660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.138666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.138667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.138669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-20:58:56.138670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 16:58:56 np0005481680 systemd[1]: Starting Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:56 np0005481680 podman[98772]: 2025-10-12 20:58:56.441977824 +0000 UTC m=+0.074343881 container create d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:56 np0005481680 podman[98772]: 2025-10-12 20:58:56.389422675 +0000 UTC m=+0.021788752 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:58:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:56 np0005481680 podman[98772]: 2025-10-12 20:58:56.529564195 +0000 UTC m=+0.161930342 container init d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:56 np0005481680 podman[98772]: 2025-10-12 20:58:56.543595766 +0000 UTC m=+0.175961853 container start d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:58:56 np0005481680 bash[98772]: d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260
Oct 12 16:58:56 np0005481680 systemd[1]: Started Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev d09cafea-c102-4981-8ae5-f94194d6a1eb (Updating grafana deployment (+1 -> 1))
Oct 12 16:58:56 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event d09cafea-c102-4981-8ae5-f94194d6a1eb (Updating grafana deployment (+1 -> 1)) in 9 seconds
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev a000fc3f-59b4-42d0-9304-83e8fbe05480 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:56 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.mcmztx on compute-0
Oct 12 16:58:56 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.mcmztx on compute-0
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742494806Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-12T20:58:56Z
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742715281Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742721931Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742725701Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742729081Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742732511Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742735801Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742739052Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742742492Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742746162Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742749312Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742752542Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742756272Z level=info msg=Target target=[all]
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742761732Z level=info msg="Path Home" path=/usr/share/grafana
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742764962Z level=info msg="Path Data" path=/var/lib/grafana
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742768032Z level=info msg="Path Logs" path=/var/log/grafana
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742771072Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742774182Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=settings t=2025-10-12T20:58:56.742777562Z level=info msg="App mode production"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore t=2025-10-12T20:58:56.743002389Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore t=2025-10-12T20:58:56.743015559Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.748455171Z level=info msg="Starting DB migrations"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.750998833Z level=info msg="Executing migration" id="create migration_log table"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.753265268Z level=info msg="Migration successfully executed" id="create migration_log table" duration=2.266075ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.75620812Z level=info msg="Executing migration" id="create user table"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.757642655Z level=info msg="Migration successfully executed" id="create user table" duration=1.433984ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.760373871Z level=info msg="Executing migration" id="add unique index user.login"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.761752825Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.378874ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.764762528Z level=info msg="Executing migration" id="add unique index user.email"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.766622123Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.859385ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.769708339Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.771311637Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.604139ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.773697776Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.774992636Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.294991ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.777666281Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.782682904Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=5.015843ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.785450671Z level=info msg="Executing migration" id="create user table v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.787434549Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.983238ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.7907131Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.792720018Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.998398ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.795965737Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.797776461Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.811264ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.800720493Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.80146187Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=740.897µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.803759797Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.804791402Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.031525ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.806936824Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.808891802Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.954248ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.811440443Z level=info msg="Executing migration" id="Update user table charset"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.811495944Z level=info msg="Migration successfully executed" id="Update user table charset" duration=56.671µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.814234141Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.816505567Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.271266ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.818932226Z level=info msg="Executing migration" id="Add missing user data"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.819647453Z level=info msg="Migration successfully executed" id="Add missing user data" duration=714.767µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.823154739Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.826196033Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=3.039984ms
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.829020941Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.830798954Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.777193ms
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.834163087Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.836550264Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.375097ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.83884143Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct 12 16:58:56 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.855383053Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=16.540023ms
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 67 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=62/55 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[55,66)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.858480198Z level=info msg="Executing migration" id="Add uid column to user"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.860783215Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.305806ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.862907566Z level=info msg="Executing migration" id="Update uid column values for users"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.863315336Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=408.28µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.866392681Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.867843756Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.456165ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.871993617Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.873612547Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.615029ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.87665028Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.878211078Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.559468ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.880833222Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.882247117Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.412654ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.884807319Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.886269334Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.462265ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.888897049Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.890510318Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.612709ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.893611503Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.893671964Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=62.691µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.896405751Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.897779184Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.372083ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.901010373Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.902422587Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.409004ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.904547369Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.905882802Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.333243ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.90826997Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.909570622Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.300102ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.912332299Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.918195952Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.862893ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.920842296Z level=info msg="Executing migration" id="create temp_user v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.922389444Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.546288ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.924582026Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.926006841Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.424125ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.928245946Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.929549918Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.303282ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.932798577Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.934195541Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.396624ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.936291572Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.937585764Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.296321ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.940845183Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.94156943Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=723.808µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.94360997Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.944752188Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.140058ms
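[editor's note] The migrator lines above trace Grafana's table-rebuild pattern on SQLite: rename the live table to a *_tmp_qwerty scratch name, create the v2 table and its indexes, copy the rows across, then drop the scratch table. SQLite's limited ALTER TABLE support is why the migrator rebuilds instead of altering in place; the same rename/copy/drop cycle recurs below for dashboard, dashboard_provisioning, data_source, and api_key. A minimal, self-contained sketch of the sequence follows; the temp_user columns are invented stand-ins, since the log does not show the real schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Stand-in v1 table; the real temp_user schema is not shown in the log.
    cur.execute("CREATE TABLE temp_user (id INTEGER PRIMARY KEY, org_id INTEGER, email TEXT)")
    cur.execute("INSERT INTO temp_user VALUES (1, 1, 'a@example.com')")

    # 1. "Rename table temp_user to temp_user_tmp_qwerty - v1"
    cur.execute("ALTER TABLE temp_user RENAME TO temp_user_tmp_qwerty")

    # 2. "create temp_user v2" with its indexes ("create index IDX_temp_user_* - v2")
    cur.execute("CREATE TABLE temp_user (id INTEGER PRIMARY KEY, org_id INTEGER, email TEXT, status TEXT)")
    cur.execute("CREATE INDEX IDX_temp_user_email ON temp_user (email)")

    # 3. "copy temp_user v1 to v2": carry the surviving columns across
    cur.execute("INSERT INTO temp_user (id, org_id, email) "
                "SELECT id, org_id, email FROM temp_user_tmp_qwerty")

    # 4. "drop temp_user_tmp_qwerty"
    cur.execute("DROP TABLE temp_user_tmp_qwerty")
    con.commit()
    print(cur.execute("SELECT * FROM temp_user").fetchall())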
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.947153645Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.947821972Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=667.887µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.950785344Z level=info msg="Executing migration" id="create star table"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.951949793Z level=info msg="Migration successfully executed" id="create star table" duration=1.164119ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.955327825Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.956849322Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.520597ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.959727031Z level=info msg="Executing migration" id="create org table v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.961216428Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.489267ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.963913563Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.965392369Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.469106ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.968551747Z level=info msg="Executing migration" id="create org_user table v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.969789197Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.23696ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:56 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0340016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.973322953Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.974720207Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.396714ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.977625517Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.978928539Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.302142ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.981765858Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.983154552Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.388174ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.986856432Z level=info msg="Executing migration" id="Update org table charset"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.986898233Z level=info msg="Migration successfully executed" id="Update org table charset" duration=43.531µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.989298671Z level=info msg="Executing migration" id="Update org_user table charset"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.989337473Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=40.222µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.991782212Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.992138591Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=356.04µs
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.994339244Z level=info msg="Executing migration" id="create dashboard table"
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.995666876Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.327382ms
Oct 12 16:58:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:56.998751421Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.000380261Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.62832ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.003439736Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.004936582Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.496026ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.00816474Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.009343259Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.178249ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.012405754Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.013689805Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.283081ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.016406461Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.017676992Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.269931ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.019947027Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.029002668Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.054351ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.031299483Z level=info msg="Executing migration" id="create dashboard v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.032599665Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.299802ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.034823009Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.036210293Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.383644ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.039346729Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.040843965Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.495386ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.04350867Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.044215388Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=706.888µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.04636273Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.048453521Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=2.089981ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.050578272Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.050673405Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=96.722µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.053178426Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:57 np0005481680 ceph-mon[73608]: Deploying daemon haproxy.rgw.default.compute-0.mcmztx on compute-0
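[editor's note] The mon line above records cephadm deploying an haproxy ingress daemon for the default RGW service on compute-0. A hypothetical follow-up check, assuming the ceph CLI and an admin keyring are available on the host; the JSON field names below are assumptions about `ceph orch ps --format json` output and may vary by release:

    import json
    import subprocess

    # List orchestrator-managed daemons and look for the haproxy daemon
    # named in the log line (haproxy.rgw.default.compute-0.mcmztx).
    out = subprocess.run(["ceph", "orch", "ps", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for d in json.loads(out):
        name = f"{d.get('daemon_type', '')}.{d.get('daemon_id', '')}"
        if name.startswith("haproxy.rgw.default"):
            print(name, d.get("hostname"), d.get("status_desc"))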
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.054894147Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.719052ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.057689575Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.059204712Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.514997ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.061254622Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.062638456Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.382544ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.065032665Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.066317115Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.28702ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.068619901Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.070738833Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.118382ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.073147251Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.074022743Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=877.042µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.076157215Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.077012156Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=854.63µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.079823874Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.079851615Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=26.811µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.082241203Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.082265384Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=24.691µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.08416648Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.086186909Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.01992ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.088604647Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.090513984Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.908067ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.092181835Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.094090841Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.909086ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.095891406Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.097846603Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.954808ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.100455046Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.100722573Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=268.927µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.103573232Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.104647758Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.073636ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.107318093Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.108200224Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=881.821µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.110238954Z level=info msg="Executing migration" id="Update dashboard title length"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.110262105Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.031µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.112151401Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.112957071Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=805.79µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.115659366Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.116535657Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=876.281µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.120731959Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.127264478Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.532209ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.130417375Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.131185384Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=769.089µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.134220728Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.135090889Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=869.751µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.13721521Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.138178734Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=963.464µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.140364297Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.140716546Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=353.969µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.143179145Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.143823232Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=642.066µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.146148869Z level=info msg="Executing migration" id="Add check_sum column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.148531446Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.382308ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.150557866Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.151507848Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=949.692µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.153541428Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.153741693Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=200.735µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.155589808Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.155780412Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=190.335µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.157533465Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.158412086Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=877.851µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.162107786Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.164367682Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.260146ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.166046412Z level=info msg="Executing migration" id="create data_source table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.167040196Z level=info msg="Migration successfully executed" id="create data_source table" duration=991.804µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.169296381Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.170232044Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=936.633µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.172517119Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.173398471Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=879.182µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.175481241Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.176402204Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=920.903µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.178053284Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.178921756Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=867.971µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.180993405Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.187032073Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.037618ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.188734244Z level=info msg="Executing migration" id="create data_source table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.189724689Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=990.205µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.19141881Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.192329571Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=910.431µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.19388081Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.194764661Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=882.821µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.197016796Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.197695542Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=678.516µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.199373674Z level=info msg="Executing migration" id="Add column with_credentials"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.202561281Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.184497ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.204340454Z level=info msg="Executing migration" id="Add secure json data column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.206664061Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.323537ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.208552206Z level=info msg="Executing migration" id="Update data_source table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.208596567Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=46.081µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.210248007Z level=info msg="Executing migration" id="Update initial version to 1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.210398241Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=150.424µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.211729313Z level=info msg="Executing migration" id="Add read_only data column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.213442086Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.712623ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.215230729Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.215381133Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=148.654µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.216979581Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.217133135Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=151.764µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.220369044Z level=info msg="Executing migration" id="Add uid column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.221994484Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.62476ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.223365287Z level=info msg="Executing migration" id="Update uid value"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.223510601Z level=info msg="Migration successfully executed" id="Update uid value" duration=145.424µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.224928395Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.22555736Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=628.825µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.227099967Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.227681672Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=581.315µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.230696175Z level=info msg="Executing migration" id="create api_key table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.23133487Z level=info msg="Migration successfully executed" id="create api_key table" duration=637.765µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.233529954Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.234146269Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=616.285µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.23623092Z level=info msg="Executing migration" id="add index api_key.key"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.236816184Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=585.134µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.239162601Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.239816487Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=653.466µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.241912698Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.242552763Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=639.745µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.244321287Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.244936342Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=616.385µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.246548811Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.247207577Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=658.346µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.248710314Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.253309766Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.598091ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.254882564Z level=info msg="Executing migration" id="create api_key table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.255495209Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=612.615µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.257046897Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.25804975Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.003273ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.25964203Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.260303386Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=659.316µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.261890494Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.262600292Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=707.898µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.264953298Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.265256047Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=302.649µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.266673811Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.267159403Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=484.921µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.26870632Z level=info msg="Executing migration" id="Update api_key table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.268729531Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.211µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.270437832Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.272267877Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.829135ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.274118322Z level=info msg="Executing migration" id="Add service account foreign key"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.276041818Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.923406ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.277771561Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.277981076Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=209.935µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.279706548Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.281658655Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.948527ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.283342526Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.285857427Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.515281ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.287747544Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.288655735Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=908.061µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.290274805Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.290965472Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=690.188µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.293018671Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.294143669Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.124918ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.296744962Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.297678105Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=934.863µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.300049142Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.301053407Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.003415ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.303347823Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.304502422Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.156148ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.307938305Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.308009496Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=72.211µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.309910832Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.309936523Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=26.451µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.311612484Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.314531825Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.919001ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.316054332Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.319143387Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.088465ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.32127897Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.321346601Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=68.201µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.323257018Z level=info msg="Executing migration" id="create quota table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.324134149Z level=info msg="Migration successfully executed" id="create quota table v1" duration=877.222µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.326554817Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.327494091Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=938.844µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.330146595Z level=info msg="Executing migration" id="Update quota table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.330175166Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.521µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.331850847Z level=info msg="Executing migration" id="create plugin_setting table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.332737268Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=888.172µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.335214059Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.336189962Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=975.283µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.338628072Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.342445215Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.814833ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.34433337Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.344360881Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.141µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.346664147Z level=info msg="Executing migration" id="create session table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.347371834Z level=info msg="Migration successfully executed" id="create session table" duration=707.547µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.349736112Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.349809024Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=73.592µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.352738235Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.352807036Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=69.131µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.35461622Z level=info msg="Executing migration" id="create playlist table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.355186295Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=570.205µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.358921695Z level=info msg="Executing migration" id="create playlist item table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.35949929Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=577.305µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.363251861Z level=info msg="Executing migration" id="Update playlist table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.363270582Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=18.231µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.364901831Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.364919992Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=18.79µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.366759536Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.367387902 +0000 UTC m=+0.042264110 container create f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.369020741Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.260785ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.371777748Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.37392177Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.142432ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.376727108Z level=info msg="Executing migration" id="drop preferences table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.37680255Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=75.422µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.378642506Z level=info msg="Executing migration" id="drop preferences table v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.378739008Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=99.172µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.380519901Z level=info msg="Executing migration" id="create preferences table v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.381423983Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=903.612µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.386368683Z level=info msg="Executing migration" id="Update preferences table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.386394494Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.781µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.388288659Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.3915537Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.266291ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.393476786Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.39363878Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=162.594µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.39651141Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.399812291Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.300541ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.401502962Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.404817982Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.31429ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.406514313Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.406579966Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=66.312µs
Oct 12 16:58:57 np0005481680 systemd[1]: Started libpod-conmon-f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f.scope.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.409937507Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.410936991Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=999.314µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.413919974Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.414942388Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.021964ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.417329926Z level=info msg="Executing migration" id="create alert table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.419049318Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.719462ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.421741424Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.422838981Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.098056ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.426560001Z level=info msg="Executing migration" id="add index alert state"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.427475263Z level=info msg="Migration successfully executed" id="add index alert state" duration=914.942µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.429939493Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.430882726Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=942.793µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.433174933Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.433935191Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=759.927µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.436457302Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.437442376Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=984.324µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.440207594Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.441156646Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=948.652µs
Oct 12 16:58:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.444816586Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct 12 16:58:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.350805848 +0000 UTC m=+0.025682076 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.454966292Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.139126ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.456884559Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.457707899Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=823.12µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.459869442Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.460798654Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=928.942µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.463366047Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.463733905Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=367.798µs
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.465416037 +0000 UTC m=+0.140292265 container init f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.465512109Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.466253087Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=740.808µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.468107932Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.469005534Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=897.392µs
Oct 12 16:58:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.471182247Z level=info msg="Executing migration" id="Add column is_default"
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.472868738 +0000 UTC m=+0.147744946 container start f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.475230976Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.047899ms
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.477651514 +0000 UTC m=+0.152527752 container attach f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.477856859Z level=info msg="Executing migration" id="Add column frequency"
Oct 12 16:58:57 np0005481680 awesome_bassi[98915]: 0 0
Oct 12 16:58:57 np0005481680 systemd[1]: libpod-f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f.scope: Deactivated successfully.
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.480818242 +0000 UTC m=+0.155694460 container died f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.481912658Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.053059ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.48525309Z level=info msg="Executing migration" id="Add column send_reminder"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.489335309Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.082169ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.491391639Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.494845803Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.451664ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.496607536Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.497480366Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=874.72µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.499915896Z level=info msg="Executing migration" id="Update alert table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.499940817Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=25.891µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.501812773Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.501836833Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=24.88µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.505126723Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.505890011Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=763.408µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.508418243Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.509380766Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=962.033µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.512310628Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.513252591Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=941.403µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.515326381Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct 12 16:58:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c76259308c3cb6341e730fba7ae2e2b0e7ed9809b172e24fd4576acc36a450be-merged.mount: Deactivated successfully.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.516219753Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=890.952µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.520401375Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.521468521Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.065786ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.524861333Z level=info msg="Executing migration" id="Add for to alert table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.528452171Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.590598ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.532877118Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct 12 16:58:57 np0005481680 podman[98899]: 2025-10-12 20:58:57.534958709 +0000 UTC m=+0.209834967 container remove f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f (image=quay.io/ceph/haproxy:2.3, name=awesome_bassi)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.536770772Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.892814ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.53871326Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.538912355Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=199.295µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.540871932Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.541815396Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=943.244µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.544287976Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.545234659Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=946.253µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.546981652Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.550835316Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.853224ms
Oct 12 16:58:57 np0005481680 systemd[1]: libpod-conmon-f471babb7f82e58feb294c692fa63097694788958c85243f601cda2bafc9352f.scope: Deactivated successfully.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.552386343Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.552453735Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=67.712µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.554944325Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.55595868Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.013334ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.561510245Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.562569891Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.061896ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.565534953Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.565636245Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=101.482µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.567427688Z level=info msg="Executing migration" id="create annotation table v5"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.568402443Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=974.734µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.570948765Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.571907868Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=960.473µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.574320006Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.575362962Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.042396ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.57775239Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.578687662Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=935.132µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.58104407Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.582162588Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.117467ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.587754314Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.58883349Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.078746ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.591261378Z level=info msg="Executing migration" id="Update annotation table charset"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.591291069Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.151µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.593205696Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.597496801Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.290725ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.600041713Z level=info msg="Executing migration" id="Drop category_id index"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.601048867Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.007283ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.602613456Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.606674823Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.060278ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.608479548Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.609258696Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=778.678µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.610946807Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.611934362Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=987.135µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.614462533Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.615472608Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.010305ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.617802234Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct 12 16:58:57 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.629268444Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.46538ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.631099668Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.631970239Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=870.531µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.633844195Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.634808479Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=964.444µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.637567895Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.637912014Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=346.329µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.639642496Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct 12 16:58:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 7 peering, 330 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 191 B/s, 2 keys/s, 9 objects/s recovering
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.640316642Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=674.426µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.642001434Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.642213479Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=212.175µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.643786737Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:57 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064003620 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.647892757Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.1056ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.649588278Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.65376819Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.181322ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.655755258Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.656719581Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=963.583µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.658398322Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.659364436Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=965.374µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.662511422Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.66279661Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=286.108µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.664659505Z level=info msg="Executing migration" id="Add epoch_end column"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.668844387Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.184692ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.670657471Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.671652175Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=992.504µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.673965451Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.674166206Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=201.135µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.675992021Z level=info msg="Executing migration" id="Move region to single row"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.676359409Z level=info msg="Migration successfully executed" id="Move region to single row" duration=367.398µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.678218375Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.679139577Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=920.772µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.680773877Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.681675329Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=901.062µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.68335783Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.684279212Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=919.182µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.690264268Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.691251962Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=988.934µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.693331632Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.694254355Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=922.773µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.696359736Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.697250858Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=891.172µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.699306058Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.69937206Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=67.212µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:57 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064003620 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.702498406Z level=info msg="Executing migration" id="create test_data table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.703414768Z level=info msg="Migration successfully executed" id="create test_data table" duration=915.782µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.706055272Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.706899033Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=843.801µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.709411204Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.710290425Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=879.181µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.712834727Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct 12 16:58:57 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.716558078Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=3.717631ms
Oct 12 16:58:57 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.719326035Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.71955396Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=228.225µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.721333283Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.721740144Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=406.971µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.72320281Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.723265191Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=63.031µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.725128826Z level=info msg="Executing migration" id="create team table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.725970997Z level=info msg="Migration successfully executed" id="create team table" duration=841.911µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.728297893Z level=info msg="Executing migration" id="add index team.org_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.729356969Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.056136ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.731783078Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.732720671Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=937.252µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.734945295Z level=info msg="Executing migration" id="Add column uid in team"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:58:57.735Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002231771s
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.743008061Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.056406ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.745193014Z level=info msg="Executing migration" id="Update uid column values in team"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.745498452Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=309.458µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.747586333Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.749339915Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.752812ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.752883782Z level=info msg="Executing migration" id="create team member table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.754368708Z level=info msg="Migration successfully executed" id="create team member table" duration=1.484486ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.75690573Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.75857005Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.66414ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.761418249Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.763196572Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.64028ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.766101303Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.767849806Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.744953ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.770926431Z level=info msg="Executing migration" id="Add column email to team table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.779558531Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.63139ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.781743463Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.790249981Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=8.506468ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.792340242Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.800626843Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=8.282432ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.802892129Z level=info msg="Executing migration" id="create dashboard acl table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.804831436Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.938846ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.807921751Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.809988781Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.06486ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.813964797Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.815961507Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.996249ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.818747194Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.820618309Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.870185ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.825419027Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.827163349Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.743182ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.830304165Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.832030687Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.726132ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.837961612Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.839754055Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.791923ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.842736368Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.844542001Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.805373ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.847238137Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.848261682Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.023035ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.850950748Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.851383838Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=435.32µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.853755266Z level=info msg="Executing migration" id="create tag table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.855490908Z level=info msg="Migration successfully executed" id="create tag table" duration=1.734652ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.858521432Z level=info msg="Executing migration" id="add index tag.key_value"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.860334827Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.813104ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.864095317Z level=info msg="Executing migration" id="create login attempt table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.865485901Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.391544ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.86870626Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.871322573Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.615463ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.879814891Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.881614874Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.801334ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.884226358Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.90282948Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=18.599582ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.904729816Z level=info msg="Executing migration" id="create login_attempt v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.905448054Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=715.098µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.907238957Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.907910714Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=669.077µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.910276841Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.910524307Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=247.316µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.913024058Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.913567382Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=543.604µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.91555152Z level=info msg="Executing migration" id="create user auth table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.916131353Z level=info msg="Migration successfully executed" id="create user auth table" duration=579.993µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.917839166Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.918533232Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=693.876µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.921142776Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.921190557Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=48.351µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.923502123Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.927309885Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.807442ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.928942516Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct 12 16:58:57 np0005481680 systemd[1]: Reloading.
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.93242724Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.484894ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.934015979Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.937432973Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.415053ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.939046212Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.942516596Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.470473ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.944183846Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.944844353Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=660.597µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.947261392Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.951001052Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.73877ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.954053837Z level=info msg="Executing migration" id="create server_lock table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.954681892Z level=info msg="Migration successfully executed" id="create server_lock table" duration=627.855µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.957197253Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.957857649Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=660.586µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.960393881Z level=info msg="Executing migration" id="create user auth token table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.961041757Z level=info msg="Migration successfully executed" id="create user auth token table" duration=648.006µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.963184609Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.963863476Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=678.147µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.966498199Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.967208797Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=710.228µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.969579505Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.970372943Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=794.408µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.972655789Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.976531724Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.875695ms
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.978214195Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.978989823Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=775.718µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.981239448Z level=info msg="Executing migration" id="create cache_data table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.981899644Z level=info msg="Migration successfully executed" id="create cache_data table" duration=657.906µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.984230431Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.984918408Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=687.747µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.988020763Z level=info msg="Executing migration" id="create short_url table v1"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.988987216Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=967.363µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.99161097Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.99239962Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=788.45µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.994646375Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.994705986Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=59.931µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.996330166Z level=info msg="Executing migration" id="delete alert_definition table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.996408787Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=78.722µs
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.998022807Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct 12 16:58:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:57.998889718Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=866.141µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.001722867Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.002554527Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=831.68µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.005185301Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.006033282Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=847.431µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.010945791Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.011012702Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=69.061µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.012644072Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.013428591Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=784.589µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.015072801Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.015799379Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=726.638µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.017931401Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.01869498Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=761.639µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.020497003Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.021277353Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=780.35µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.02280115Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.027138105Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.337075ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.028937389Z level=info msg="Executing migration" id="drop alert_definition table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.02980122Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=863.341µs
Oct 12 16:58:58 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.0318675Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.031952052Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=84.882µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.034054944Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.034811791Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=756.598µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.037005005Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.037752553Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=747.488µs
Oct 12 16:58:58 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.043938784Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.044722873Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=784.019µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.048844853Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.048891684Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=47.371µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.050731769Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.051669792Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=938.013µs
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/226 objects degraded (0.442%), 1 pg degraded)
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.055760351Z level=info msg="Executing migration" id="create alert_instance table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.05650695Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=747.009µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.064549115Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.065480928Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=931.483µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.07135116Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.072250883Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=899.643µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.07624994Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.080927484Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.677583ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.0832357Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.084163102Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=925.732µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.085738691Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.086471229Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=732.367µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.090093307Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.112313188Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=22.219741ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.114128772Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.136137838Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.003125ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.137981722Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.138920945Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=940.063µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.140963055Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.141753804Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=789.99µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.146348186Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.150316793Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.968066ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.152446984Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.156360969Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.913515ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.159197348Z level=info msg="Executing migration" id="create alert_rule table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.159935707Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=738.099µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.163635626Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.164475307Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=839.871µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.167864009Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.168659849Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=795.71µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.171692842Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.172551814Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=858.542µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.175128786Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.175193457Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=64.871µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.17693316Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.181165143Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.230253ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.183377196Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.187610429Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.233283ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.189986887Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.194147169Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.159772ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.195738657Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.196587568Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=849.201µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.201739543Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.202646115Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=906.212µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.207127184Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.211269635Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.141871ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.2163856Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.222348215Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.960555ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.224191389Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.225421549Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.22774ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.227964721Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.233957577Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.992086ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.235784272Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.241902741Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.117769ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.243743785Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.243844337Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=100.992µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.246103803Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.247455156Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.351013ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.251915444Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct 12 16:58:58 np0005481680 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.mcmztx for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.253262937Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.347173ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.255936872Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.257313486Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.376154ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.259830717Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.259919739Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=89.412µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.261787334Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.268206141Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.415537ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.270047525Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.276442721Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.394336ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.278349227Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.284332693Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.982716ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.286312761Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.292179484Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.866343ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.294217333Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.300183889Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.966546ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.302365802Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.302839934Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=474.952µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.304872273Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.305764615Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=892.042µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.308884921Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.314851316Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.965824ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.316765513Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.316861865Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=96.862µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.318905534Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.324947401Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.041197ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.326786876Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.327910764Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.123568ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.330319412Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.336444002Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.124059ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.338146043Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.338995503Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=849.17µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.341535175Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.342685353Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.149988ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.345468861Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.351512957Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.043316ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.353393524Z level=info msg="Executing migration" id="create provenance_type table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.354257624Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=863.12µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.356796066Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.357917884Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.121468ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.361300656Z level=info msg="Executing migration" id="create alert_image table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.362185397Z level=info msg="Migration successfully executed" id="create alert_image table" duration=884.621µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.364449683Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.365547489Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.095656ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.369000603Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.369125136Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=124.963µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.371026422Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.372045747Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.019045ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.374486487Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.375647195Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.160058ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.377545991Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.377983542Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.380023642Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.380605986Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=580.164µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.383427144Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.384571232Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.144128ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.386282774Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.392546697Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.261232ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.394559045Z level=info msg="Executing migration" id="create library_element table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.395731774Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.172869ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.398270796Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.399474055Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.202789ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.402792616Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.403703528Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=911.092µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.405955652Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.407215913Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.259761ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.409606112Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.4107735Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.166798ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.413109146Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.413137907Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.151µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.415107745Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.415202888Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=95.523µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.417117264Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.417460102Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=342.918µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.419444941Z level=info msg="Executing migration" id="create data_keys table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.420488487Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.043365ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.423804977Z level=info msg="Executing migration" id="create secrets table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.424711269Z level=info msg="Migration successfully executed" id="create secrets table" duration=908.112µs
Oct 12 16:58:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.428345357Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct 12 16:58:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.461997386Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.648759ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.464397695Z level=info msg="Executing migration" id="add name column into data_keys"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.469296404Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.896059ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.470949504Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.471106848Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=157.304µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.472717227Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.49791832Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.200783ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.505501964Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct 12 16:58:58 np0005481680 podman[99063]: 2025-10-12 20:58:58.509200745 +0000 UTC m=+0.055594994 container create 3fc9175e9865e4a6e3a33ec92803e3b089cb72d94bef34d02edfdb54b2904a75 (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-rgw-default-compute-0-mcmztx)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.531675802Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=26.171058ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.533394343Z level=info msg="Executing migration" id="create kv_store table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.534121241Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=726.938µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.536848798Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.537708108Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=858.69µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.540270811Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.540466735Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=197.534µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.544318309Z level=info msg="Executing migration" id="create permission table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.545034907Z level=info msg="Migration successfully executed" id="create permission table" duration=716.378µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.54724129Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.5480436Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=801.87µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.550240363Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.551092414Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=853.971µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.555386449Z level=info msg="Executing migration" id="create role table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.556087216Z level=info msg="Migration successfully executed" id="create role table" duration=698.757µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.55831792Z level=info msg="Executing migration" id="add column display_name"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.563583268Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.265058ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.565132586Z level=info msg="Executing migration" id="add column group_name"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.570090687Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.957801ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.571913701Z level=info msg="Executing migration" id="add index role.org_id"
Oct 12 16:58:58 np0005481680 podman[99063]: 2025-10-12 20:58:58.479152614 +0000 UTC m=+0.025546913 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.572749191Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=835.3µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.575095408Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.575949279Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=853.661µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.578407468Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.57926383Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=855.712µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.5825978Z level=info msg="Executing migration" id="create team role table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.583284028Z level=info msg="Migration successfully executed" id="create team role table" duration=686.017µs
Oct 12 16:58:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83843208203a59619aac83a3ab9d861bb180f26f73300033b5f632df3e3f84c1/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.585635335Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.586587888Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=952.313µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.589049118Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.58996656Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=916.592µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.5924363Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.593293151Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=858.641µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.598311963Z level=info msg="Executing migration" id="create user role table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.598977459Z level=info msg="Migration successfully executed" id="create user role table" duration=665.756µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.601141182Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.601987532Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=847.64µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.604530364Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.605381955Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=852.851µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.609456455Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.610401487Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=944.462µs
Oct 12 16:58:58 np0005481680 podman[99063]: 2025-10-12 20:58:58.611004222 +0000 UTC m=+0.157398511 container init 3fc9175e9865e4a6e3a33ec92803e3b089cb72d94bef34d02edfdb54b2904a75 (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-rgw-default-compute-0-mcmztx)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.612753425Z level=info msg="Executing migration" id="create builtin role table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.613461431Z level=info msg="Migration successfully executed" id="create builtin role table" duration=707.597µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.616006553Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.617468159Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.461726ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.620100133Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct 12 16:58:58 np0005481680 podman[99063]: 2025-10-12 20:58:58.620587716 +0000 UTC m=+0.166981955 container start 3fc9175e9865e4a6e3a33ec92803e3b089cb72d94bef34d02edfdb54b2904a75 (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-rgw-default-compute-0-mcmztx)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.621379814Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.279321ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.624735467Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct 12 16:58:58 np0005481680 bash[99063]: 3fc9175e9865e4a6e3a33ec92803e3b089cb72d94bef34d02edfdb54b2904a75
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.633880829Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.142491ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.637905157Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct 12 16:58:58 np0005481680 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.mcmztx for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.638972983Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.067766ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.642326394Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.643223056Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=896.172µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-rgw-default-compute-0-mcmztx[99079]: [NOTICE] 284/205858 (2) : New worker #1 (4) forked
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.646913905Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.647839808Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=925.753µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.64995182Z level=info msg="Executing migration" id="add unique index role.uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.650840091Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=888.081µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.652419939Z level=info msg="Executing migration" id="create seed assignment table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.653089306Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=668.627µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.657545574Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.658640051Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.094207ms
Oct 12 16:58:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.661036109Z level=info msg="Executing migration" id="add column hidden to role table"
Oct 12 16:58:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000048s ======
Oct 12 16:58:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:58:58.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.667303362Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.266823ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.669930056Z level=info msg="Executing migration" id="permission kind migration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.675579553Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.649087ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.677268454Z level=info msg="Executing migration" id="permission attribute migration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.682884571Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.616117ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.686176791Z level=info msg="Executing migration" id="permission identifier migration"
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.692081415Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.904094ms
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.695031387Z level=info msg="Executing migration" id="add permission identifier index"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.695932188Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=900.681µs
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.698865259Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.699826803Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=961.264µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.704102747Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.704899837Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=797.53µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.709364475Z level=info msg="Executing migration" id="create query_history table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.710514284Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.149688ms
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.713512316Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.714667605Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.153249ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.718889937Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.718966789Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=80.592µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.721269165Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.721327977Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=59.572µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.723263194Z level=info msg="Executing migration" id="teams permissions migration"
Oct 12 16:58:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.723681434Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=418.399µs
Oct 12 16:58:58 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.jdbfxi on compute-2
Oct 12 16:58:58 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.jdbfxi on compute-2
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.729522836Z level=info msg="Executing migration" id="dashboard permissions"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.73009334Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=571.235µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.735135672Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.735813649Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=678.797µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.738282219Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.738494884Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=212.735µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.740294808Z level=info msg="Executing migration" id="alerting notification permissions"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.74079382Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=501.022µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.744762266Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.745619508Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=856.942µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.747970455Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.749142893Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.171478ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.75146056Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.759553577Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.093447ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.765833609Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.766165768Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=329.729µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.769238223Z level=info msg="Executing migration" id="create correlation table v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.770555255Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.316113ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.775110855Z level=info msg="Executing migration" id="add index correlations.uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.776478669Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.366583ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.780316882Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.781749927Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.435115ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.78599291Z level=info msg="Executing migration" id="add correlation config column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.795428839Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.434869ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.798451504Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.799875398Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.423685ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.803554047Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.80491794Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.363653ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.809594005Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.832585604Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.989199ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.834797788Z level=info msg="Executing migration" id="create correlation v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.836038417Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.240269ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.837934134Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.839154714Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.21792ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.843301185Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.844589806Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.288271ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.847045235Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.848258275Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.21271ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.850751956Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.851025862Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=273.776µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.853024791Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.854011895Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=986.704µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.856137497Z level=info msg="Executing migration" id="add provisioning column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.863911976Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.773218ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.866198612Z level=info msg="Executing migration" id="create entity_events table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.867104394Z level=info msg="Migration successfully executed" id="create entity_events table" duration=905.272µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.868913538Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.870034875Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.121017ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.873805836Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.874319329Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.876190575Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.876679216Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.878669255Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.87967535Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.005985ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.881469153Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.88257015Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.100387ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.885168784Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.886372482Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.203608ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.891014586Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.891900927Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=884.131µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.89574425Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.896604892Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=860.332µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.898035527Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.898907407Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=870.05µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.900855335Z level=info msg="Executing migration" id="Drop public config table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.901576282Z level=info msg="Migration successfully executed" id="Drop public config table" duration=722.107µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.903649093Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.904580625Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=931.622µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.906640996Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.907692092Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.048905ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.909656329Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.910579672Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=924.103µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.912292113Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.913147184Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=854.341µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.915699066Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.936129613Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=20.425537ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.938154252Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.944435705Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.281513ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.946206878Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.952006179Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.799131ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.953856795Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.954040489Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=183.004µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.95614767Z level=info msg="Executing migration" id="add share column"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.962149817Z level=info msg="Migration successfully executed" id="add share column" duration=6.001387ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.964141135Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.964297079Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=154.504µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.969296381Z level=info msg="Executing migration" id="create file table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.970028968Z level=info msg="Migration successfully executed" id="create file table" duration=732.427µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.972545489Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.97336916Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=823.341µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:58 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.975648485Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.976453015Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=804.14µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.980239066Z level=info msg="Executing migration" id="create file_meta table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.980827001Z level=info msg="Migration successfully executed" id="create file_meta table" duration=587.905µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.983048824Z level=info msg="Executing migration" id="file table idx: path key"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.983861315Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=812.551µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.986008637Z level=info msg="Executing migration" id="set path collation in file table"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.986088419Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=81.862µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.988547298Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.98860664Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=59.432µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.990755902Z level=info msg="Executing migration" id="managed permissions migration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.991292655Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=536.863µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.9931121Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.993269374Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=157.494µs
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.995209911Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.996487382Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.277821ms
Oct 12 16:58:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:58.998091271Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.00510098Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=7.009159ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.006864885Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.006991719Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=127.094µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.008875607Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.009879293Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.002976ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.012351706Z level=info msg="Executing migration" id="update group index for alert rules"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.012663105Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=311.879µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.014346678Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.014534543Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=187.495µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.016521883Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.016872043Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=349.94µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.018683219Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.025083723Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.400404ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.026807549Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.033344616Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.535528ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.035189074Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.036097827Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=909.293µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.040672285Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/226 objects degraded (0.442%), 1 pg degraded)
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: Cluster is now healthy
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: Deploying daemon haproxy.rgw.default.compute-2.jdbfxi on compute-2
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.116187637Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.510052ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.118403443Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.119472051Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.069358ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.121142884Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.122116979Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=970.525µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.124616944Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.145685995Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.067642ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.148205019Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.15443149Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.225951ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.155974579Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.156214725Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=237.516µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.15797585Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.158149566Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=173.276µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.159854799Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.160017064Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=162.404µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.161693446Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.16184856Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=154.314µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.163704798Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.163871493Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=167.865µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.165679089Z level=info msg="Executing migration" id="create folder table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.166483509Z level=info msg="Migration successfully executed" id="create folder table" duration=805.67µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.168397569Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.169438556Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.040677ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.17195957Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.172825303Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=865.373µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.178374855Z level=info msg="Executing migration" id="Update folder title length"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.178400256Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.591µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.180789288Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.181898875Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.108797ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.185748395Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.186769111Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.020386ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.188596218Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.189746398Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.14742ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.192537399Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.193009231Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=471.572µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.194859329Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.195130666Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=270.347µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.19684135Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.197882917Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.041137ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.199746625Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.200818602Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.074077ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.202606798Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.203683826Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.077458ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.205435211Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.206285403Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=849.892µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.207908105Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.208798418Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=888.703µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.210406149Z level=info msg="Executing migration" id="create anon_device table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.211130387Z level=info msg="Migration successfully executed" id="create anon_device table" duration=722.648µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.212976635Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.21396424Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=987.345µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.216513086Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.217640315Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.127199ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.220409066Z level=info msg="Executing migration" id="create signing_key table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.22130903Z level=info msg="Migration successfully executed" id="create signing_key table" duration=899.774µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.227050697Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.228000081Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=951.594µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.230486515Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.231419339Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=932.944µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.233108853Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.233310208Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=201.825µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.23497204Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.24155207Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.57842ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.243173121Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.243756887Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=584.396µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.245300116Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.246172938Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=872.392µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.248482718Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.249452023Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=968.245µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.250918211Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.251809953Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=891.452µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.253722353Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.254968385Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.245652ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.256759131Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.257651324Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=892.493µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.259383618Z level=info msg="Executing migration" id="create sso_setting table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.260305881Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=922.693µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.262720624Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.263478474Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=758.37µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.265094255Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.265370052Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=275.987µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.26758696Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.267663491Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=77.022µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.271298884Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.278285444Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.98614ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.279781532Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.286631719Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.847156ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.288476227Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.288778814Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=303.117µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=migrator t=2025-10-12T20:58:59.290417116Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.539495055s
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore t=2025-10-12T20:58:59.291655218Z level=info msg="Created default organization"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=secrets t=2025-10-12T20:58:59.295269101Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=plugin.store t=2025-10-12T20:58:59.320661874Z level=info msg="Loading plugins..."
Oct 12 16:58:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct 12 16:58:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=local.finder t=2025-10-12T20:58:59.422885543Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=plugin.store t=2025-10-12T20:58:59.422930364Z level=info msg="Plugins loaded" count=55 duration=102.27014ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=query_data t=2025-10-12T20:58:59.427938742Z level=info msg="Query Service initialization"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=live.push_http t=2025-10-12T20:58:59.433748121Z level=info msg="Live Push Gateway initialization"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.migration t=2025-10-12T20:58:59.437320493Z level=info msg=Starting
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.migration t=2025-10-12T20:58:59.437932589Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.migration orgID=1 t=2025-10-12T20:58:59.438620587Z level=info msg="Migrating alerts for organisation"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.migration orgID=1 t=2025-10-12T20:58:59.439722766Z level=info msg="Alerts found to migrate" alerts=0
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.migration t=2025-10-12T20:58:59.442662761Z level=info msg="Completed alerting migration"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.state.manager t=2025-10-12T20:58:59.47490784Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=infra.usagestats.collector t=2025-10-12T20:58:59.478737938Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=provisioning.datasources t=2025-10-12T20:58:59.48074845Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=provisioning.alerting t=2025-10-12T20:58:59.501637697Z level=info msg="starting to provision alerting"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=provisioning.alerting t=2025-10-12T20:58:59.501672448Z level=info msg="finished to provision alerting"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.state.manager t=2025-10-12T20:58:59.501923384Z level=info msg="Warming state cache for startup"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.multiorg.alertmanager t=2025-10-12T20:58:59.5021351Z level=info msg="Starting MultiOrg Alertmanager"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.state.manager t=2025-10-12T20:58:59.502723275Z level=info msg="State cache has been initialized" states=0 duration=799.191µs
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ngalert.scheduler t=2025-10-12T20:58:59.502803107Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ticker t=2025-10-12T20:58:59.502884449Z level=info msg=starting first_tick=2025-10-12T20:59:00Z
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=grafanaStorageLogger t=2025-10-12T20:58:59.504698766Z level=info msg="Storage starting"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=http.server t=2025-10-12T20:58:59.50644667Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=http.server t=2025-10-12T20:58:59.506890613Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=provisioning.dashboard t=2025-10-12T20:58:59.584407705Z level=info msg="starting to provision dashboards"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=grafana.update.checker t=2025-10-12T20:58:59.590203334Z level=info msg="Update check succeeded" duration=86.917625ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=plugins.update.checker t=2025-10-12T20:58:59.596500757Z level=info msg="Update check succeeded" duration=92.789987ms
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore.transactions t=2025-10-12T20:58:59.63163691Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 12 16:58:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 7 peering, 330 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 133 B/s, 1 keys/s, 6 objects/s recovering
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:59 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0340016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore.transactions t=2025-10-12T20:58:59.651754997Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Oct 12 16:58:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:58:59 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=provisioning.dashboard t=2025-10-12T20:58:59.827331641Z level=info msg="finished to provision dashboards"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=grafana-apiserver t=2025-10-12T20:58:59.864797354Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 12 16:58:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=grafana-apiserver t=2025-10-12T20:58:59.865479403Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 12 16:59:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.2 deep-scrub starts
Oct 12 16:59:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.2 deep-scrub ok
Oct 12 16:59:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:00.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct 12 16:59:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:00.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.xctvez on compute-2
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.xctvez on compute-2
Oct 12 16:59:00 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 26 completed events
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:59:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:00 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:01 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct 12 16:59:01 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: Deploying daemon keepalived.rgw.default.compute-2.xctvez on compute-2
Oct 12 16:59:01 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 7 peering, 330 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 117 B/s, 1 keys/s, 5 objects/s recovering
Oct 12 16:59:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:01 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:01 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe0340016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:02 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 12 16:59:02 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 12 16:59:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:02.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:59:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.ojrghf on compute-0
Oct 12 16:59:02 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.ojrghf on compute-0
Oct 12 16:59:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:02 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060001c00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:03 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Oct 12 16:59:03 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.600483206 +0000 UTC m=+0.070181065 container create f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, description=keepalived for Ceph)
Oct 12 16:59:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 1 keys/s, 3 objects/s recovering
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 12 16:59:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:03 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:03 np0005481680 systemd[1]: Started libpod-conmon-f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3.scope.
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.571783039 +0000 UTC m=+0.041480948 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 12 16:59:03 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:03 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.710540297 +0000 UTC m=+0.180238196 container init f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.722037282 +0000 UTC m=+0.191735151 container start f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.726715492 +0000 UTC m=+0.196413361 container attach f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived)
Oct 12 16:59:03 np0005481680 peaceful_mirzakhani[99216]: 0 0
Oct 12 16:59:03 np0005481680 systemd[1]: libpod-f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3.scope: Deactivated successfully.
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.730421178 +0000 UTC m=+0.200119047 container died f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 12 16:59:03 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ef7d7c9e5c966552f2203ed0c7c9eb7c4f52cf3e0a7d059cb4903af11fe9e91f-merged.mount: Deactivated successfully.
Oct 12 16:59:03 np0005481680 podman[99199]: 2025-10-12 20:59:03.79274175 +0000 UTC m=+0.262439619 container remove f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3 (image=quay.io/ceph/keepalived:2.2.4, name=peaceful_mirzakhani, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9)
Oct 12 16:59:03 np0005481680 systemd[1]: libpod-conmon-f18afadf75151f8ff737541ca9344f54df65b31d0d8324a428a49625216523d3.scope: Deactivated successfully.
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: Deploying daemon keepalived.rgw.default.compute-0.ojrghf on compute-0
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 12 16:59:03 np0005481680 systemd[1]: Reloading.
Oct 12 16:59:03 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 12 16:59:04 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:59:04 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:59:04 np0005481680 systemd[1]: Reloading.
Oct 12 16:59:04 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:59:04 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:59:04 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Oct 12 16:59:04 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Oct 12 16:59:04 np0005481680 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.ojrghf for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:04.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 12 16:59:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 12 16:59:04 np0005481680 podman[99363]: 2025-10-12 20:59:04.852039356 +0000 UTC m=+0.070331989 container create 5f725b9c8467e3ca5bdcbb30583fc8c13de160bc83193ea72e1bc44aea455ea1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 12 16:59:04 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 12 16:59:04 np0005481680 podman[99363]: 2025-10-12 20:59:04.823333419 +0000 UTC m=+0.041626102 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 12 16:59:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e02adbdca68146533ddc02bf8ecc0328deab552a57d53ef1ad8f10d9df9564/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:04 np0005481680 podman[99363]: 2025-10-12 20:59:04.939329272 +0000 UTC m=+0.157621965 container init 5f725b9c8467e3ca5bdcbb30583fc8c13de160bc83193ea72e1bc44aea455ea1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, name=keepalived, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 12 16:59:04 np0005481680 podman[99363]: 2025-10-12 20:59:04.948355053 +0000 UTC m=+0.166647686 container start 5f725b9c8467e3ca5bdcbb30583fc8c13de160bc83193ea72e1bc44aea455ea1 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, release=1793, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9)
Oct 12 16:59:04 np0005481680 bash[99363]: 5f725b9c8467e3ca5bdcbb30583fc8c13de160bc83193ea72e1bc44aea455ea1
Oct 12 16:59:04 np0005481680 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.ojrghf for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:04 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Starting VRRP child process, pid=4
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: Startup complete
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:59:04 2025: (VI_0) Entering BACKUP STATE
Oct 12 16:59:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:04 2025: (VI_0) Entering BACKUP STATE (init)
Oct 12 16:59:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:05 2025: VRRP_Script(check_backend) succeeded
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev a000fc3f-59b4-42d0-9304-83e8fbe05480 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event a000fc3f-59b4-42d0-9304-83e8fbe05480 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [progress INFO root] update: starting ev be960d0d-752e-4d4b-90f2-69a5ce7fb521 (Updating prometheus deployment (+1 -> 1))
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 12 16:59:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:59:05 2025: (VI_0) Entering MASTER STATE
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 12 16:59:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:05 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 70 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 70 pg[10.1d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 70 pg[10.5( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:05 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 70 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:05 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: Deploying daemon prometheus.compute-0 on compute-0
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 27 completed events
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:59:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:05 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event 7a7f911e-f990-4ffc-8ab5-b4bebdbf7c85 (Global Recovery Event) in 10 seconds
Oct 12 16:59:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:06 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct 12 16:59:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc[97508]: Sun Oct 12 20:59:06 2025: (VI_0) received an invalid passwd!
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct 12 16:59:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:06.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 12 16:59:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 12 16:59:06 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.5( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 71 pg[10.1d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=71) [0]/[2] r=-1 lpr=71 pi=[62,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:06 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:06 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:07 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Oct 12 16:59:07 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Oct 12 16:59:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 12 16:59:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:07 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 12 16:59:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 12 16:59:07 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 12 16:59:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:07 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 12 16:59:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-rgw-default-compute-0-ojrghf[99378]: Sun Oct 12 20:59:08 2025: (VI_0) Entering MASTER STATE
Oct 12 16:59:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 12 16:59:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 12 16:59:08 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.5( v 72'1048 (0'0,72'1048] local-lis/les=0/0 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=64'1045 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.15( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.5( v 72'1048 (0'0,72'1048] local-lis/les=0/0 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=64'1045 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.15( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:08 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 73 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:08 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.262336865 +0000 UTC m=+3.217544691 volume create d1e3194d0944a77261a7bcc7d78cd37fc422ceb8d123b0e507a1d8a13e9bd7a9
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.269578311 +0000 UTC m=+3.224786147 container create 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.248734865 +0000 UTC m=+3.203942711 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 12 16:59:09 np0005481680 systemd[1]: Started libpod-conmon-698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1.scope.
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 12 16:59:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/029e4cf3329fec6523bca65720a840e4da747388284296e4d37aeb6278704678/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.36834026 +0000 UTC m=+3.323548196 container init 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.381229412 +0000 UTC m=+3.336437248 container start 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.384717101 +0000 UTC m=+3.339924977 container attach 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 dazzling_kilby[99739]: 65534 65534
Oct 12 16:59:09 np0005481680 systemd[1]: libpod-698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1.scope: Deactivated successfully.
Oct 12 16:59:09 np0005481680 conmon[99739]: conmon 698aae335019cb18cc66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1.scope/container/memory.events
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.386769334 +0000 UTC m=+3.341977180 container died 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-029e4cf3329fec6523bca65720a840e4da747388284296e4d37aeb6278704678-merged.mount: Deactivated successfully.
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.433960148 +0000 UTC m=+3.389168014 container remove 698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_kilby, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99480]: 2025-10-12 20:59:09.439970622 +0000 UTC m=+3.395178488 volume remove d1e3194d0944a77261a7bcc7d78cd37fc422ceb8d123b0e507a1d8a13e9bd7a9
Oct 12 16:59:09 np0005481680 systemd[1]: libpod-conmon-698aae335019cb18cc664da6eb0c9191fc617c2760a478be92cb0f72cb3beeb1.scope: Deactivated successfully.
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.526384374 +0000 UTC m=+0.055004236 volume create e81cbcb2b0b118c2bc50ca05bb66752fdbe8b6eb62d5502d42d8e5d1d2c3d80f
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.538551386 +0000 UTC m=+0.067171288 container create 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 systemd[1]: Started libpod-conmon-13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f.scope.
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.500855007 +0000 UTC m=+0.029474949 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 12 16:59:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01770002c70b32059987be0038904d3ab8cd5c49411b50dea256df953e90c53d/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.61604615 +0000 UTC m=+0.144666012 container init 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.622568317 +0000 UTC m=+0.151188179 container start 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 relaxed_lamarr[99775]: 65534 65534
Oct 12 16:59:09 np0005481680 systemd[1]: libpod-13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f.scope: Deactivated successfully.
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.6254313 +0000 UTC m=+0.154051182 container attach 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.625742399 +0000 UTC m=+0.154362271 container died 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 12 16:59:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-01770002c70b32059987be0038904d3ab8cd5c49411b50dea256df953e90c53d-merged.mount: Deactivated successfully.
Oct 12 16:59:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:09 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.67599365 +0000 UTC m=+0.204613522 container remove 13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f (image=quay.io/prometheus/prometheus:v2.51.0, name=relaxed_lamarr, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:09 np0005481680 podman[99757]: 2025-10-12 20:59:09.680914507 +0000 UTC m=+0.209534379 volume remove e81cbcb2b0b118c2bc50ca05bb66752fdbe8b6eb62d5502d42d8e5d1d2c3d80f
Oct 12 16:59:09 np0005481680 systemd[1]: libpod-conmon-13b59663ae75f1fd5df0879c4780efb25df04885d0f62b69aa01719dd779ea8f.scope: Deactivated successfully.
Oct 12 16:59:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:09 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:09 np0005481680 systemd[1]: Reloading.
Oct 12 16:59:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 12 16:59:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 12 16:59:09 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 74 pg[10.5( v 72'1048 (0'0,72'1048] local-lis/les=73/74 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=72'1048 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 74 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=6 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 74 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:09 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 74 pg[10.15( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=71/62 les/c/f=72/63/0 sis=73) [0] r=0 lpr=73 pi=[62,73)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:09 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:59:09 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:59:10 np0005481680 systemd[1]: Reloading.
Oct 12 16:59:10 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 16:59:10 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 16:59:10 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct 12 16:59:10 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct 12 16:59:10 np0005481680 systemd[1]: Starting Ceph prometheus.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:10.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:10.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:10 np0005481680 podman[99918]: 2025-10-12 20:59:10.689232913 +0000 UTC m=+0.076809186 container create a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:10 np0005481680 podman[99918]: 2025-10-12 20:59:10.65526856 +0000 UTC m=+0.042844873 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct 12 16:59:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fdaaa1ab7bb107d0458e1b03c2e5331d593bbfde647c87ca805d6c5ffafce4d/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fdaaa1ab7bb107d0458e1b03c2e5331d593bbfde647c87ca805d6c5ffafce4d/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:10 np0005481680 podman[99918]: 2025-10-12 20:59:10.778885908 +0000 UTC m=+0.166462181 container init a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:10 np0005481680 podman[99918]: 2025-10-12 20:59:10.784330398 +0000 UTC m=+0.171906661 container start a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:10 np0005481680 bash[99918]: a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175
Oct 12 16:59:10 np0005481680 systemd[1]: Started Ceph prometheus.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.825Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.825Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.825Z caller=main.go:623 level=info host_details="(Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 x86_64 compute-0 (none))"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.825Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.825Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.827Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.828Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.831Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.832Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.835Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.835Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.28µs
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.835Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.835Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.835Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=42.421µs wal_replay_duration=254.297µs wbl_replay_duration=180ns total_replay_duration=350.82µs
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.837Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.837Z caller=main.go:1153 level=info msg="TSDB started"
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.837Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.866Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=29.086859ms db_storage=1.39µs remote_storage=1.28µs web_handler=360ns query_engine=910ns scrape=5.389148ms scrape_sd=154.784µs notify=15.201µs notify_sd=13.27µs rules=22.9459ms tracing=12.171µs
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.866Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0[99933]: ts=2025-10-12T20:59:10.867Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mgr[73901]: [progress INFO root] complete: finished ev be960d0d-752e-4d4b-90f2-69a5ce7fb521 (Updating prometheus deployment (+1 -> 1))
Oct 12 16:59:10 np0005481680 ceph-mgr[73901]: [progress INFO root] Completed event be960d0d-752e-4d4b-90f2-69a5ce7fb521 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 12 16:59:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:10 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:10 np0005481680 ceph-mgr[73901]: [progress INFO root] Writing back 29 completed events
Oct 12 16:59:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 12 16:59:11 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:11 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 12 16:59:11 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 12 16:59:11 np0005481680 systemd-logind[783]: New session 38 of user zuul.
Oct 12 16:59:11 np0005481680 systemd[1]: Started Session 38 of User zuul.
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 4 unknown, 4 active+remapped, 2 peering, 327 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:11 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:11 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:11 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  1: '-n'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  2: 'mgr.compute-0.fmjeht'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  3: '-f'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  4: '--setuser'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  5: 'ceph'
Oct 12 16:59:11 np0005481680 ceph-mgr[73901]: mgr respawn  6: '--setgroup'
Oct 12 16:59:11 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.fmjeht(active, since 96s), standbys: compute-1.orllvh, compute-2.iamnla
Oct 12 16:59:12 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:12 np0005481680 ceph-mon[73608]: from='mgr.14436 192.168.122.100:0/1376198090' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 12 16:59:12 np0005481680 systemd[1]: session-36.scope: Deactivated successfully.
Oct 12 16:59:12 np0005481680 systemd[1]: session-36.scope: Consumed 53.918s CPU time.
Oct 12 16:59:12 np0005481680 systemd-logind[783]: Session 36 logged out. Waiting for processes to exit.
Oct 12 16:59:12 np0005481680 systemd-logind[783]: Removed session 36.
Oct 12 16:59:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setuser ceph since I am not root
Oct 12 16:59:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ignoring --setgroup ceph since I am not root
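The "mgr respawn" lines above enumerate the argv the manager re-executes itself with; the value following --setgroup is not echoed in this capture, but the two "ignoring" messages directly above confirm it was "ceph". As a rough sketch, the respawn amounts to the process replacing itself in place:

    # illustrative only; ceph-mgr performs the equivalent re-exec internally
    exec /usr/bin/ceph-mgr -n mgr.compute-0.fmjeht -f --setuser ceph --setgroup ceph

Because the containerized process already runs unprivileged, --setuser and --setgroup are ignored, exactly as logged.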
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: pidfile_write: ignore empty --pid-file
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'alerts'
Oct 12 16:59:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:12.222+0000 7f3850d74140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'balancer'
Oct 12 16:59:12 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 12 16:59:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:12.302+0000 7f3850d74140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'cephadm'
Oct 12 16:59:12 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 12 16:59:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:12 np0005481680 python3.9[100126]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:59:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
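The anonymous "HEAD / HTTP/1.0" requests arriving roughly every two seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes against radosgw; each completes with HTTP 200 and an empty body. A probe of this shape can be reproduced manually (RGW_ENDPOINT is a placeholder; the beast frontend's listening address and port are not shown in this section):

    curl -sI "http://${RGW_ENDPOINT}/" -o /dev/null -w '%{http_code}\n'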
Oct 12 16:59:12 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'crash'
Oct 12 16:59:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:12 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:13.059+0000 7f3850d74140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'dashboard'
Oct 12 16:59:13 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 12 16:59:13 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:13 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'devicehealth'
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:13 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:13.737+0000 7f3850d74140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'diskprediction_local'
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]:  from numpy import show_config as show_numpy_config
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:13.889+0000 7f3850d74140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'influx'
Oct 12 16:59:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:13.956+0000 7f3850d74140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 12 16:59:13 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'insights'
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'iostat'
Oct 12 16:59:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:14.084+0000 7f3850d74140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'k8sevents'
Oct 12 16:59:14 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Oct 12 16:59:14 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'localpool'
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mds_autoscaler'
Oct 12 16:59:14 np0005481680 python3.9[100353]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
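The journal escapes embedded newlines as #012 (octal 012, i.e. \n), so the _raw_params value above is a nine-line shell script. Decoded verbatim, it reads:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

In other words: download the repo-setup tool, install it into a throwaway virtualenv, configure the "antelope" package repositories, then clean up after itself.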
Oct 12 16:59:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:14.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'mirroring'
Oct 12 16:59:14 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'nfs'
Oct 12 16:59:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:14 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c0016a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.108+0000 7f3850d74140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'orchestrator'
Oct 12 16:59:15 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 12 16:59:15 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.316+0000 7f3850d74140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_perf_query'
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.390+0000 7f3850d74140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'osd_support'
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.461+0000 7f3850d74140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'pg_autoscaler'
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.546+0000 7f3850d74140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'progress'
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.619+0000 7f3850d74140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'prometheus'
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:15 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:15 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:15.959+0000 7f3850d74140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 12 16:59:15 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rbd_support'
Oct 12 16:59:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:16.054+0000 7f3850d74140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:59:16 np0005481680 ceph-mgr[73901]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 12 16:59:16 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'restful'
Oct 12 16:59:16 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rgw'
Oct 12 16:59:16 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 12 16:59:16 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 12 16:59:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:16.478+0000 7f3850d74140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:59:16 np0005481680 ceph-mgr[73901]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 12 16:59:16 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'rook'
Oct 12 16:59:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:16.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:16 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.001+0000 7f3850d74140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'selftest'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.065+0000 7f3850d74140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'snap_schedule'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.139+0000 7f3850d74140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'stats'
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'status'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.272+0000 7f3850d74140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telegraf'
Oct 12 16:59:17 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 12 16:59:17 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.336+0000 7f3850d74140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'telemetry'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.476+0000 7f3850d74140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'test_orchestrator'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:17 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.709+0000 7f3850d74140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'volumes'
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:17 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:17 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh restarted
Oct 12 16:59:17 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.orllvh started
Oct 12 16:59:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:17.990+0000 7f3850d74140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 12 16:59:17 np0005481680 ceph-mgr[73901]: mgr[py] Loading python module 'zabbix'
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla restarted
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.iamnla started
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.058+0000 7f3850d74140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fmjeht restarted
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fmjeht
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: ms_deliver_dispatch: unhandled message 0x5610367a7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map Activating!
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.fmjeht(active, starting, since 0.037524s), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr handle_mgr_map I am now activating
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: Active manager daemon compute-0.fmjeht restarted
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: Activating manager daemon compute-0.fmjeht
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.nlzxsf"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.nlzxsf"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 all = 0
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vonnzo"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vonnzo"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 all = 0
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ophvii"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ophvii"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 all = 0
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fmjeht", "id": "compute-0.fmjeht"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iamnla", "id": "compute-2.iamnla"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mgr metadata", "who": "compute-1.orllvh", "id": "compute-1.orllvh"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 all = 1
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "mon metadata"}]: dispatch
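This burst of mon_command dispatches is the freshly activated mgr refreshing daemon metadata for every mon, mds, mgr, and osd in the cluster. The same queries can be issued by hand with the standard CLI equivalents, for example:

    ceph mon metadata compute-0
    ceph mds metadata
    ceph mgr metadata
    ceph osd metadata 0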
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: balancer
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Starting
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : Manager daemon compute-0.fmjeht is now available
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_20:59:18
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: cephadm
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: crash
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: dashboard
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO access_control] Loading user roles DB version=2
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO sso] Loading SSO DB version=1
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: devicehealth
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Starting
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: iostat
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: nfs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: orchestrator
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: pg_autoscaler
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: progress
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [progress INFO root] Loading...
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f37cf09c640>, <progress.module.GhostEvent object at 0x7f37cf09c670>, <progress.module.GhostEvent object at 0x7f37cf09c6a0>, <progress.module.GhostEvent object at 0x7f37cf09c6d0>, <progress.module.GhostEvent object at 0x7f37cf09c700>, <progress.module.GhostEvent object at 0x7f37cf09c730>, <progress.module.GhostEvent object at 0x7f37cf09c760>, <progress.module.GhostEvent object at 0x7f37cf09c790>, <progress.module.GhostEvent object at 0x7f37cf09c7c0>, <progress.module.GhostEvent object at 0x7f37cf09c7f0>, <progress.module.GhostEvent object at 0x7f37cf09c820>, <progress.module.GhostEvent object at 0x7f37cf09c850>, <progress.module.GhostEvent object at 0x7f37cf09c880>, <progress.module.GhostEvent object at 0x7f37cf09c8b0>, <progress.module.GhostEvent object at 0x7f37cf09c8e0>, <progress.module.GhostEvent object at 0x7f37cf09c910>, <progress.module.GhostEvent object at 0x7f37cf09c940>, <progress.module.GhostEvent object at 0x7f37cf09c970>, <progress.module.GhostEvent object at 0x7f37cf09c9a0>, <progress.module.GhostEvent object at 0x7f37cf09c9d0>, <progress.module.GhostEvent object at 0x7f37cf09ca00>, <progress.module.GhostEvent object at 0x7f37cf09ca30>, <progress.module.GhostEvent object at 0x7f37cf09ca60>, <progress.module.GhostEvent object at 0x7f37cf09ca90>, <progress.module.GhostEvent object at 0x7f37cf09cac0>, <progress.module.GhostEvent object at 0x7f37cf09caf0>, <progress.module.GhostEvent object at 0x7f37cf09cb20>, <progress.module.GhostEvent object at 0x7f37cf09cb50>, <progress.module.GhostEvent object at 0x7f37cf09cb80>] historic events
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [progress INFO root] Loaded OSDMap, ready.
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: prometheus
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO root] Cache enabled
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO root] starting metric collection thread
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
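As part of the same startup sweep, the mgr pulls the OSD blocklist; the matching CLI form is:

    ceph osd blocklist ls --format json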
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO root] Starting engine...
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:18] ENGINE Bus STARTING
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:18] ENGINE Bus STARTING
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: CherryPy Checker:
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: The Application mounted at '' has an empty config.
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] recovery thread starting
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] starting setup
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: rbd_support
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: restful
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: status
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: telemetry
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [restful INFO root] server_addr: :: server_port: 8003
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [restful WARNING root] server not running: no certificate configured
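The restful module loads but stays down because no TLS certificate is configured. If the module were wanted, Ceph's documented helper can generate a self-signed certificate for it:

    ceph restful create-self-signed-cert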
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 12 16:59:18 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1e deep-scrub starts
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] PerfHandler: starting
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: mgr load Constructed class from module: volumes
Oct 12 16:59:18 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1e deep-scrub ok
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_task_task: images, start_after=
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.335+0000 7f37bacc4640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.337+0000 7f37bdcca640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.337+0000 7f37bdcca640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.337+0000 7f37bdcca640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.337+0000 7f37bdcca640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T20:59:18.337+0000 7f37bdcca640 -1 client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: client.0 error registering admin socket command: (17) File exists
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TaskHandler: starting
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"} v 0)
Oct 12 16:59:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] setup complete
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:18] ENGINE Serving on http://:::9283
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:18] ENGINE Bus STARTED
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:18] ENGINE Serving on http://:::9283
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:18] ENGINE Bus STARTED
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [prometheus INFO root] Engine started.
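With the prometheus module's CherryPy engine started, the exporter listens on all addresses at port 9283 (server_addr :: above). Assuming the module's standard /metrics path, a quick scrape check from this node would be:

    curl -s http://192.168.122.100:9283/metrics | head -n 20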
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
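
[annotation] The block above is the dashboard module registering each REST controller against its URL prefix while mgr compute-0.fmjeht starts up. Once the engine is serving, these endpoints are reachable over HTTPS with a bearer token obtained from the dashboard login endpoint (POST /api/auth, registered earlier in this startup sequence). A minimal sketch of exercising one of them; the dashboard URL and credentials are assumptions, and the unverified TLS context is only for a lab with a self-signed certificate:

    import json
    import ssl
    import urllib.request

    BASE = "https://192.168.122.100:8443"   # assumed dashboard URL; not shown in the log
    CTX = ssl._create_unverified_context()  # lab only: self-signed certificate
    HDRS = {"Accept": "application/vnd.ceph.api.v1.0+json",
            "Content-Type": "application/json"}

    # POST /api/auth returns a JWT; username/password are placeholders.
    req = urllib.request.Request(
        BASE + "/api/auth",
        data=json.dumps({"username": "admin", "password": "secret"}).encode(),
        headers=HDRS, method="POST")
    token = json.load(urllib.request.urlopen(req, context=CTX))["token"]

    # GET /api/osd, one of the controllers registered above.
    req = urllib.request.Request(
        BASE + "/api/osd",
        headers={**HDRS, "Authorization": "Bearer " + token})
    print(json.load(urllib.request.urlopen(req, context=CTX)))
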
Oct 12 16:59:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:18 np0005481680 systemd-logind[783]: New session 39 of user ceph-admin.
Oct 12 16:59:18 np0005481680 systemd[1]: Started Session 39 of User ceph-admin.
Oct 12 16:59:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
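
[annotation] The paired "starting new request" / "req done" / beast access-log lines, arriving from 192.168.122.102 and .100 roughly every two seconds as an anonymous HEAD / answered with 200 and an empty body, have the shape of load-balancer health checks against the RGW beast frontend. A sketch of such a probe; the RGW port (8080) is an assumption, since these lines do not record it:

    import http.client

    # haproxy's httpchk speaks HTTP/1.0; http.client sends HTTP/1.1,
    # which RGW answers the same way for this purpose.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")          # no auth header, hence "anonymous"
    print(conn.getresponse().status)   # expect 200 with an empty body
    conn.close()
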
Oct 12 16:59:18 np0005481680 ceph-mgr[73901]: [dashboard INFO dashboard.module] Engine started.
Oct 12 16:59:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:18 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: Manager daemon compute-0.fmjeht is now available
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/mirror_snapshot_schedule"}]: dispatch
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fmjeht/trash_purge_schedule"}]: dispatch
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.fmjeht(active, since 1.09013s), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:19 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 12 16:59:19 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 12 16:59:19 np0005481680 podman[100677]: 2025-10-12 20:59:19.439050209 +0000 UTC m=+0.090447986 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 16:59:19 np0005481680 podman[100677]: 2025-10-12 20:59:19.562251256 +0000 UTC m=+0.213649023 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
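
[annotation] These exec / exec_died pairs recur for every cephadm-managed container in this window (mon here, then node-exporter, alertmanager, grafana, prometheus, haproxy, keepalived below), consistent with a periodic cephadm refresh execing into each container. The same events can be watched directly; a sketch, noting that podman's JSON field names such as "Status" and "Name" may vary between versions:

    import json
    import subprocess

    # Stream exec_died container events as they happen.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "event=exec_died"],
        stdout=subprocess.PIPE, text=True)
    for evline in proc.stdout:
        ev = json.loads(evline)
        print(ev.get("Status"), ev.get("Name"))
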
Oct 12 16:59:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:19 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:59:19] ENGINE Bus STARTING
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:59:19] ENGINE Bus STARTING
Oct 12 16:59:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:19 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:59:19] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:59:19] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:59:19] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:59:19] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:59:19] ENGINE Bus STARTED
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:59:19] ENGINE Bus STARTED
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: [cephadm INFO cherrypy.error] [12/Oct/2025:20:59:19] ENGINE Client ('192.168.122.100', 34002) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:59:19 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : [12/Oct/2025:20:59:19] ENGINE Client ('192.168.122.100', 34002) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
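
[annotation] The "Client ... lost" events during the TLS handshake, logged seconds after the cephadm endpoint began serving HTTPS on port 7150, match a peer that opens a TCP connection and closes it without ever sending a ClientHello, as a plain TCP liveness probe would. A sketch that provokes the same server-side EOF, with the address taken from the ENGINE lines above:

    import socket

    # Connect to the cephadm HTTPS endpoint and close without sending a
    # ClientHello; the server then logs the handshake EOF seen above.
    s = socket.create_connection(("192.168.122.100", 7150), timeout=5)
    s.close()
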
Oct 12 16:59:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 12 16:59:20 np0005481680 podman[100816]: 2025-10-12 20:59:20.129341687 +0000 UTC m=+0.084907113 container exec 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:20 np0005481680 podman[100816]: 2025-10-12 20:59:20.141363597 +0000 UTC m=+0.096929023 container exec_died 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:59:19] ENGINE Bus STARTING
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:59:19] ENGINE Serving on http://192.168.122.100:8765
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:59:19] ENGINE Serving on https://192.168.122.100:7150
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:59:19] ENGINE Bus STARTED
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: [12/Oct/2025:20:59:19] ENGINE Client ('192.168.122.100', 34002) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
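
[annotation] The audit trail above shows the mgr (most likely the pg_autoscaler) nudging pgp_num_actual upward one step at a time: 7 here, then 8 and 9 in the following seconds, so placement-group splitting proceeds gradually rather than all at once. The same mon command can be issued from Python via librados; a sketch, assuming python3-rados is installed and an admin keyring sits at the default path:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    # Mirrors the JSON payload recorded verbatim in the audit lines above.
    cmd = json.dumps({"prefix": "osd pool set",
                      "pool": "cephfs.cephfs.meta",
                      "var": "pgp_num_actual",
                      "val": "7"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, errs)
    cluster.shutdown()
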
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[6.e( empty local-lis/les=0/0 n=0 ec=51/17 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[6.6( empty local-lis/les=0/0 n=0 ec=51/17 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.668713570s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 active pruub 216.010772705s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.668680191s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 216.010772705s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.667205811s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 active pruub 216.010787964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.667180061s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 216.010787964s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.666682243s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 active pruub 216.010787964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.666660309s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 216.010787964s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.666063309s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 active pruub 216.010681152s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 76 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=76 pruub=8.666040421s) [1] r=-1 lpr=76 pi=[66,76)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 216.010681152s@ mbc={}] state<Start>: transitioning to Stray
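
[annotation] Each start_peering_interval line above records an osdmap-driven interval change for one PG: the up and acting sets move from [0] to [1] and osd.0's role goes from 0 (primary) to -1, after which the PG transitions to Stray on this OSD. Bursts like this are easier to skim with a small parser that extracts just the pgid and set transitions; a sketch over a line abbreviated from this log:

    import re

    PAT = re.compile(
        r"pg\[(?P<pgid>\S+)\(.*?"
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]")

    line = ("osd.0 pg_epoch: 76 pg[10.16( v 48'1034 ... sis=76) [1] r=-1 ...] "
            "PeeringState::start_peering_interval up [0] -> [1], "
            "acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, "
            "role 0 -> -1, ...")

    m = PAT.search(line)
    if m:
        print(f"pg {m['pgid']}: up {m['up_old']} -> {m['up_new']}, "
              f"acting {m['act_old']} -> {m['act_new']}")
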
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.fmjeht(active, since 2s), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:59:20 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 12 16:59:20 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 12 16:59:20 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 12 16:59:20 np0005481680 podman[100924]: 2025-10-12 20:59:20.543591289 +0000 UTC m=+0.066391098 container exec faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:20 np0005481680 podman[100924]: 2025-10-12 20:59:20.551258226 +0000 UTC m=+0.074058075 container exec_died faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 16:59:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:20.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:20 np0005481680 podman[100990]: 2025-10-12 20:59:20.77736909 +0000 UTC m=+0.060471786 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:59:20 np0005481680 podman[100990]: 2025-10-12 20:59:20.787349886 +0000 UTC m=+0.070452582 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:59:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:20 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
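
[annotation] The recurring ganesha TIRPC svc_vc_recv EVENTs fire when a peer connects to the NFS TCP socket and disconnects, or sends non-RPC bytes, before a complete RPC record marker arrives; given the haproxy and keepalived containers for the nfs.cephfs ingress just above, TCP health checks are a plausible source. A sketch of such a probe, with the address and the standard NFS port (2049) as assumptions:

    import socket

    s = socket.create_connection(("192.168.122.100", 2049), timeout=5)
    s.sendall(b"\x00")   # a lone byte is not a complete RPC record marker
    s.close()            # ganesha then logs the failed read and "(will set dead)"
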
Oct 12 16:59:21 np0005481680 podman[101055]: 2025-10-12 20:59:21.006956353 +0000 UTC m=+0.067724292 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git)
Oct 12 16:59:21 np0005481680 podman[101055]: 2025-10-12 20:59:21.018444848 +0000 UTC m=+0.079212767 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[6.6( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=76/77 n=1 ec=51/17 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=47'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 77 pg[6.e( v 47'39 lc 46'19 (0'0,47'39] local-lis/les=76/77 n=1 ec=51/17 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=47'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:21 np0005481680 podman[101140]: 2025-10-12 20:59:21.235739486 +0000 UTC m=+0.062878448 container exec 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:21 np0005481680 podman[101140]: 2025-10-12 20:59:21.264575757 +0000 UTC m=+0.091714719 container exec_died 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Oct 12 16:59:21 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:59:21 np0005481680 systemd[1]: session-38.scope: Deactivated successfully.
Oct 12 16:59:21 np0005481680 systemd[1]: session-38.scope: Consumed 8.359s CPU time.
Oct 12 16:59:21 np0005481680 systemd-logind[783]: Session 38 logged out. Waiting for processes to exit.
Oct 12 16:59:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:21 np0005481680 systemd-logind[783]: Removed session 38.
Oct 12 16:59:21 np0005481680 podman[101209]: 2025-10-12 20:59:21.514665148 +0000 UTC m=+0.055884519 container exec d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:21 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:21 np0005481680 podman[101209]: 2025-10-12 20:59:21.671762167 +0000 UTC m=+0.212981488 container exec_died d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:21 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:22] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Oct 12 16:59:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:22] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Oct 12 16:59:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v7: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 12 16:59:22 np0005481680 podman[101314]: 2025-10-12 20:59:22.2078166 +0000 UTC m=+0.099602822 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 78 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 78 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 78 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 78 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[66,77)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:22 np0005481680 podman[101314]: 2025-10-12 20:59:22.245494189 +0000 UTC m=+0.137280411 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:59:22 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.fmjeht(active, since 4s), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 16:59:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:22 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993643761s) [1] async=[1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 48'1034 active pruub 225.370697021s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993703842s) [1] async=[1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 48'1034 active pruub 225.370727539s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.1e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993578911s) [1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 225.370727539s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993458748s) [1] async=[1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 48'1034 active pruub 225.370773315s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.6( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993574142s) [1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 225.370697021s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.e( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=6 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993381500s) [1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 225.370773315s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993169785s) [1] async=[1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 48'1034 active pruub 225.370834351s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 79 pg[10.16( v 48'1034 (0'0,48'1034] local-lis/les=77/78 n=5 ec=55/42 lis/c=77/66 les/c/f=78/67/0 sis=79 pruub=14.993003845s) [1] r=-1 lpr=79 pi=[66,79)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 225.370834351s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 12 16:59:23 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:23 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:59:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:59:23 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
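
[annotation] Here cephadm regenerates the cluster's client configuration: it asks the mons for a minimal conf and for the client.admin keyring (the two mon_commands dispatched just above), then rewrites /etc/ceph/ceph.conf on every managed host. The same two artifacts can be fetched by hand; a sketch shelling out to the ceph CLI, which must run with admin privileges on a cluster host (both commands appear verbatim in the audit lines above):

    import subprocess

    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          check=True, capture_output=True, text=True).stdout
    keyring = subprocess.run(["ceph", "auth", "get", "client.admin"],
                             check=True, capture_output=True, text=True).stdout
    print(conf, keyring, sep="\n")
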
Oct 12 16:59:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:23 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v10: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 80 pg[6.8( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=80 pruub=11.705508232s) [1] r=-1 lpr=80 pi=[51,80)/1 crt=47'39 lcod 0'0 mlcod 0'0 active pruub 223.088485718s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 80 pg[6.8( v 47'39 (0'0,47'39] local-lis/les=51/53 n=0 ec=51/17 lis/c=51/51 les/c/f=53/53/0 sis=80 pruub=11.705471992s) [1] r=-1 lpr=80 pi=[51,80)/1 crt=47'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.088485718s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 80 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=80) [0] r=0 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 80 pg[10.18( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=80) [0] r=0 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 12 16:59:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 12 16:59:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 12 16:59:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 81 pg[10.18( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[55,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 81 pg[10.18( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[55,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[55,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:24 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[55,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
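[editor's note] In epoch 81 the acting set of pgs 10.8 and 10.18 moves from [0] to [1] while the up set stays [0], so osd.0 drops from primary (role 0) to stray (role -1) and the PGs show as remapped; epoch 83 further down reverses this. One way to inspect a transition like this is `ceph pg <pgid> query`, wrapped here in a short Python sketch (pgid taken from the log; assumes admin access):

    import json
    import subprocess

    # Query peering state for one of the PGs seen above (10.18).
    out = subprocess.run(
        ["ceph", "pg", "10.18", "query", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)
    # "state", "up" and "acting" mirror the fields in the osd log lines.
    print(info["state"], info["up"], info["acting"])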
Oct 12 16:59:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:24 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.conf
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 12 16:59:25 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 12 16:59:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:25 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 12 16:59:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:25 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:25 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v14: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s; 56 B/s, 5 objects/s recovering
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: Updating compute-1:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: Updating compute-0:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: Updating compute-2:/var/lib/ceph/5adb8c35-1b74-5730-a252-62321f654cd5/config/ceph.client.admin.keyring
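[editor's note] cephadm maintains two copies of the client config on each managed host: /etc/ceph/ for operators and /var/lib/ceph/<fsid>/config/ for the daemons it deploys; the ceph.conf it pushes is the mon's minimal conf (the same "config generate-minimal-conf" command is audited just below). A sketch of producing that file by hand, assuming admin credentials; the output path is illustrative only, since cephadm owns the real ones:

    import subprocess

    # Ask the mon for the minimal client config that cephadm distributes.
    minimal = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    with open("/tmp/ceph.conf.minimal", "w") as f:
        f.write(minimal)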
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:59:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
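[editor's note] The config-key writes above (mgr/cephadm/host.<name>, host.<name>.devices.0, osd_remove_queue, spec.nfs.cephfs) are the cephadm mgr module persisting its host inventory, pending OSD removals, and service specs in the mon key-value store. They can be read back with `ceph config-key get`; a sketch, with the key name copied from the log:

    import subprocess

    # Fetch cephadm's cached device inventory for compute-1 (a JSON value).
    inv = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-1.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(inv)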
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 12 16:59:26 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 83 pg[10.8( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 83 pg[10.8( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=6 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 83 pg[10.18( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 83 pg[10.18( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:27 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:59:27 np0005481680 podman[102495]: 2025-10-12 20:59:27.360884776 +0000 UTC m=+0.059429759 container create 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:59:27 np0005481680 systemd[1]: Started libpod-conmon-9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4.scope.
Oct 12 16:59:27 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:27 np0005481680 podman[102495]: 2025-10-12 20:59:27.340503672 +0000 UTC m=+0.039048675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:27 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 12 16:59:27 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 12 16:59:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:27 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 12 16:59:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:27 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 12 16:59:28 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 12 16:59:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v17: 337 pgs: 4 remapped+peering, 4 peering, 329 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:28 np0005481680 podman[102495]: 2025-10-12 20:59:28.276453687 +0000 UTC m=+0.974998650 container init 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 16:59:28 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 84 pg[10.8( v 48'1034 (0'0,48'1034] local-lis/les=83/84 n=6 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:28 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 84 pg[10.18( v 48'1034 (0'0,48'1034] local-lis/les=83/84 n=5 ec=55/42 lis/c=81/55 les/c/f=82/56/0 sis=83) [0] r=0 lpr=83 pi=[55,83)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:28 np0005481680 podman[102495]: 2025-10-12 20:59:28.289214765 +0000 UTC m=+0.987759728 container start 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:59:28 np0005481680 podman[102495]: 2025-10-12 20:59:28.293722061 +0000 UTC m=+0.992267044 container attach 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:28 np0005481680 lucid_shtern[102511]: 167 167
Oct 12 16:59:28 np0005481680 systemd[1]: libpod-9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4.scope: Deactivated successfully.
Oct 12 16:59:28 np0005481680 podman[102495]: 2025-10-12 20:59:28.298149365 +0000 UTC m=+0.996694348 container died 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:28 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4958249d45c2fa7bfd47ccca90d813f4527718518da9b43429130487e746ba06-merged.mount: Deactivated successfully.
Oct 12 16:59:28 np0005481680 podman[102495]: 2025-10-12 20:59:28.343726176 +0000 UTC m=+1.042271149 container remove 9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_shtern, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 16:59:28 np0005481680 systemd[1]: libpod-conmon-9157e25a66545b24f69574da7848089dd0e27a896e8137d6c8475393f90959a4.scope: Deactivated successfully.
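[editor's note] The create, init, start, attach, died, remove sequence around container 9157e25a... is cephadm running a short-lived helper in the ceph image; the "167 167" it prints is the ceph uid/gid pair. A rough reconstruction with a one-shot `podman run --rm`; the entrypoint and stat target below are assumptions, since the log records only the image digest and the output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Hypothetical reconstruction: report the owner uid/gid of /var/lib/ceph
    # inside the image, mirroring the "167 167" line in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())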
Oct 12 16:59:28 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 12 16:59:28 np0005481680 podman[102534]: 2025-10-12 20:59:28.536328349 +0000 UTC m=+0.055022726 container create 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 16:59:28 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 12 16:59:28 np0005481680 systemd[1]: Started libpod-conmon-03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905.scope.
Oct 12 16:59:28 np0005481680 podman[102534]: 2025-10-12 20:59:28.504810638 +0000 UTC m=+0.023505065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:28 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:28 np0005481680 podman[102534]: 2025-10-12 20:59:28.638160898 +0000 UTC m=+0.156855275 container init 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:28 np0005481680 podman[102534]: 2025-10-12 20:59:28.655635246 +0000 UTC m=+0.174329593 container start 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:28 np0005481680 podman[102534]: 2025-10-12 20:59:28.658462359 +0000 UTC m=+0.177156746 container attach 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:29 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:29 np0005481680 beautiful_wozniak[102551]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:59:29 np0005481680 beautiful_wozniak[102551]: --> All data devices are unavailable
Oct 12 16:59:29 np0005481680 systemd[1]: libpod-03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905.scope: Deactivated successfully.
Oct 12 16:59:29 np0005481680 podman[102567]: 2025-10-12 20:59:29.096044351 +0000 UTC m=+0.038932002 container died 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:59:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-70de6e1258c50b4f59ec9e0c26f507f9bd7acd8219e4d33e91bbca909e74ab71-merged.mount: Deactivated successfully.
Oct 12 16:59:29 np0005481680 podman[102567]: 2025-10-12 20:59:29.149973107 +0000 UTC m=+0.092860708 container remove 03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 16:59:29 np0005481680 systemd[1]: libpod-conmon-03a4d6f89f677cd9e18d6f7d584aeb41fa292fd62173020ec8954f2b78b48905.scope: Deactivated successfully.
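[editor's note] beautiful_wozniak is a ceph-volume device probe: "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means the only candidate LV is already consumed by osd.0, so there is nothing new to deploy. A sketch of the kind of dry-run report involved; the exact flags are an assumption, since the log shows only ceph-volume's summary lines (device path taken from the LVM listing further down):

    import subprocess

    # Dry-run report; may exit nonzero when no device is deployable,
    # hence check=False.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "/dev/ceph_vg0/ceph_lv0"],
        check=False,
    )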
Oct 12 16:59:29 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 12 16:59:29 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 12 16:59:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:29 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:29 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.882731747 +0000 UTC m=+0.061589204 container create d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 16:59:29 np0005481680 systemd[1]: Started libpod-conmon-d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f.scope.
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.860240849 +0000 UTC m=+0.039098306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.973513782 +0000 UTC m=+0.152371249 container init d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.979767993 +0000 UTC m=+0.158625420 container start d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.983230822 +0000 UTC m=+0.162088319 container attach d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:29 np0005481680 nostalgic_ritchie[102689]: 167 167
Oct 12 16:59:29 np0005481680 systemd[1]: libpod-d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f.scope: Deactivated successfully.
Oct 12 16:59:29 np0005481680 podman[102674]: 2025-10-12 20:59:29.985268864 +0000 UTC m=+0.164126331 container died d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-89cdb43028216f8cfac12a44734e74fec5b48d385c9212fa6049e199f53213d4-merged.mount: Deactivated successfully.
Oct 12 16:59:30 np0005481680 podman[102674]: 2025-10-12 20:59:30.032227161 +0000 UTC m=+0.211084618 container remove d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 16:59:30 np0005481680 systemd[1]: libpod-conmon-d2327cbef08c719cc50ddad4d9d1a7e97fe4ab4de9602cd8a7761147ef76065f.scope: Deactivated successfully.
Oct 12 16:59:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 141 B/s, 5 objects/s recovering
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.228847447 +0000 UTC m=+0.061134923 container create 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 16:59:30 np0005481680 systemd[1]: Started libpod-conmon-3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd.scope.
Oct 12 16:59:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 12 16:59:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e758358f5aa2fd07a071fd74bfe4ea0b3ff426abc246a43d07b3e7e03cff66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e758358f5aa2fd07a071fd74bfe4ea0b3ff426abc246a43d07b3e7e03cff66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e758358f5aa2fd07a071fd74bfe4ea0b3ff426abc246a43d07b3e7e03cff66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e758358f5aa2fd07a071fd74bfe4ea0b3ff426abc246a43d07b3e7e03cff66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.208993436 +0000 UTC m=+0.041280932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.316323966 +0000 UTC m=+0.148611532 container init 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 12 16:59:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 12 16:59:30 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 85 pg[6.9( empty local-lis/les=0/0 n=0 ec=51/17 lis/c=59/59 les/c/f=60/60/0 sis=85) [0] r=0 lpr=85 pi=[59,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.33238612 +0000 UTC m=+0.164673616 container start 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.336735221 +0000 UTC m=+0.169022727 container attach 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:59:30 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 12 16:59:30 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 12 16:59:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]: {
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:    "0": [
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:        {
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "devices": [
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "/dev/loop3"
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            ],
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "lv_name": "ceph_lv0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "lv_size": "21470642176",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "name": "ceph_lv0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "tags": {
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.cluster_name": "ceph",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.crush_device_class": "",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.encrypted": "0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.osd_id": "0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.type": "block",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.vdo": "0",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:                "ceph.with_tpm": "0"
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            },
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "type": "block",
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:            "vg_name": "ceph_vg0"
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:        }
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]:    ]
Oct 12 16:59:30 np0005481680 magical_aryabhata[102730]: }
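[editor's note] The JSON printed by magical_aryabhata is `ceph-volume lvm list --format json` output for osd.0: a single block LV (/dev/ceph_vg0/ceph_lv0 backed by /dev/loop3) whose lv_tags carry the cluster fsid, osd_id, and osd_fsid that cephadm needs to adopt or restart the OSD. A short sketch of extracting those fields from the same report, assuming it is run where the ceph-volume CLI is available:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(raw)
    # Top-level keys are OSD ids ("0" above), each mapping to a list of LVs.
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])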
Oct 12 16:59:30 np0005481680 systemd[1]: libpod-3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd.scope: Deactivated successfully.
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.65835297 +0000 UTC m=+0.490640466 container died 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct 12 16:59:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-12e758358f5aa2fd07a071fd74bfe4ea0b3ff426abc246a43d07b3e7e03cff66-merged.mount: Deactivated successfully.
Oct 12 16:59:30 np0005481680 podman[102713]: 2025-10-12 20:59:30.715355506 +0000 UTC m=+0.547643002 container remove 3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:59:30 np0005481680 systemd[1]: libpod-conmon-3cb53bca1f53e887b23acba85512136dc2468242f411789ee4808c5b642c8fcd.scope: Deactivated successfully.
Oct 12 16:59:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:31 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 12 16:59:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 12 16:59:31 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 12 16:59:31 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 86 pg[6.9( v 47'39 (0'0,47'39] local-lis/les=85/86 n=0 ec=51/17 lis/c=59/59 les/c/f=60/60/0 sis=85) [0] r=0 lpr=85 pi=[59,85)/1 crt=47'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 12 16:59:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.464537508 +0000 UTC m=+0.063686988 container create 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:31 np0005481680 systemd[1]: Started libpod-conmon-734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b.scope.
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.436504238 +0000 UTC m=+0.035653778 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.560627989 +0000 UTC m=+0.159777469 container init 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.569850426 +0000 UTC m=+0.168999916 container start 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.573628253 +0000 UTC m=+0.172777733 container attach 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:31 np0005481680 adoring_mendel[102862]: 167 167
Oct 12 16:59:31 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 12 16:59:31 np0005481680 systemd[1]: libpod-734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b.scope: Deactivated successfully.
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.577928974 +0000 UTC m=+0.177078464 container died 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 16:59:31 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 12 16:59:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1d2d16b00e897c5696e807505d0bce85076c1e27bf1e8c8f97e64dfa66631908-merged.mount: Deactivated successfully.
Oct 12 16:59:31 np0005481680 podman[102845]: 2025-10-12 20:59:31.633284367 +0000 UTC m=+0.232433827 container remove 734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:31 np0005481680 systemd[1]: libpod-conmon-734196bec847605db0109a6afdad574a431447b07784cb4b95b3245fbc44a71b.scope: Deactivated successfully.
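Annotation: the repeated create -> init -> start -> attach -> died -> remove sequences for throwaway names like magical_aryabhata and adoring_mendel are the footprint of cephadm running short-lived helper containers from the ceph image and discarding them. A small sketch, assuming journal text in this exact format on stdin, that pairs create/remove events per container ID and reports lifetimes (the event names and podman timestamp layout are taken from the lines above):

    import re
    import sys
    from datetime import datetime

    # Matches e.g.:
    # podman[102845]: 2025-10-12 20:59:31.464537508 +0000 UTC m=+0.063 container create 7341...
    EVENT = re.compile(
        r"podman\[\d+\]: (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(\d+) \+0000 UTC "
        r"m=\+\S+ container (create|remove) ([0-9a-f]{64})"
    )

    created = {}
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(
            microsecond=int(m.group(2)[:6])  # nanoseconds -> microseconds
        )
        event, cid = m.group(3), m.group(4)
        if event == "create":
            created[cid] = ts
        elif event == "remove" and cid in created:
            print(f"{cid[:12]} lived {(ts - created.pop(cid)).total_seconds():.3f}s")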
Oct 12 16:59:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:31 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:31 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:31 np0005481680 podman[102887]: 2025-10-12 20:59:31.84965188 +0000 UTC m=+0.071404407 container create 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 16:59:31 np0005481680 systemd[1]: Started libpod-conmon-35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07.scope.
Oct 12 16:59:31 np0005481680 podman[102887]: 2025-10-12 20:59:31.822150183 +0000 UTC m=+0.043902740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec31fba71601439eb99aad589979ed8c186223309e08aba616e8a5cf89a1d3ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec31fba71601439eb99aad589979ed8c186223309e08aba616e8a5cf89a1d3ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec31fba71601439eb99aad589979ed8c186223309e08aba616e8a5cf89a1d3ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec31fba71601439eb99aad589979ed8c186223309e08aba616e8a5cf89a1d3ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
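Annotation: the kernel's "supports timestamps until 2038 (0x7fffffff)" notices fire once per bind mount of these xfs-backed overlay paths and are informational: the filesystem carries 32-bit inode timestamps, which roll over at the classic y2038 limit. The hex value decodes directly:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the 32-bit time_t limit.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00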
Oct 12 16:59:31 np0005481680 podman[102887]: 2025-10-12 20:59:31.958781975 +0000 UTC m=+0.180534552 container init 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 16:59:31 np0005481680 podman[102887]: 2025-10-12 20:59:31.970635621 +0000 UTC m=+0.192388108 container start 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:31 np0005481680 podman[102887]: 2025-10-12 20:59:31.973913435 +0000 UTC m=+0.195665962 container attach 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 16:59:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:32] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Oct 12 16:59:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:32] "GET /metrics HTTP/1.1" 200 46581 "" "Prometheus/2.51.0"
Oct 12 16:59:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v21: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 142 B/s, 5 objects/s recovering
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
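Annotation: the mgr is walking pgp_num_actual up one step at a time (10, 11, 12, ...) for cephfs.cephfs.meta and default.rgw.log; each increment lands in a new osdmap epoch (e86, e87, ...) so placement-group splits are throttled rather than applied all at once. A hedged sketch of the same ramp driven from the CLI, assuming admin access and an illustrative target of 16 (the target value is an assumption, not taken from the log):

    import subprocess
    import time

    POOL, TARGET = "cephfs.cephfs.meta", 16  # target chosen for illustration

    for val in range(10, TARGET + 1):
        # Same mon command the mgr dispatches in the journal lines above.
        subprocess.run(
            ["ceph", "osd", "pool", "set", POOL, "pgp_num_actual", str(val)],
            check=True,
        )
        time.sleep(2)  # let the new osdmap epoch settle before the next step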
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 87 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=12.437624931s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=48'1034 mlcod 0'0 active pruub 232.011001587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 87 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=12.437574387s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 232.011001587s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 87 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=12.429932594s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=48'1034 mlcod 0'0 active pruub 232.004592896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 87 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=87 pruub=12.429889679s) [1] r=-1 lpr=87 pi=[66,87)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 232.004592896s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct 12 16:59:32 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct 12 16:59:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
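Annotation: the anonymous "HEAD / HTTP/1.0" requests hitting radosgw every couple of seconds from 192.168.122.100 and .102, always 200 with sub-millisecond latency, look like load-balancer health checks (haproxy-style httpchk probes behave exactly like this). A minimal sketch of an equivalent probe, assuming the RGW beast frontend listens on port 8080 on this host (the port is an assumption; it does not appear in these lines):

    import http.client

    # HEAD / against the RGW frontend; a 200 means the gateway answers.
    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)  # assumed port
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, matching the beast log lines
    conn.close()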
Oct 12 16:59:32 np0005481680 lvm[102977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:59:32 np0005481680 lvm[102977]: VG ceph_vg0 finished
Oct 12 16:59:32 np0005481680 clever_jackson[102903]: {}
Oct 12 16:59:32 np0005481680 systemd[1]: libpod-35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07.scope: Deactivated successfully.
Oct 12 16:59:32 np0005481680 podman[102887]: 2025-10-12 20:59:32.831263728 +0000 UTC m=+1.053016225 container died 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:32 np0005481680 systemd[1]: libpod-35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07.scope: Consumed 1.414s CPU time.
Oct 12 16:59:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ec31fba71601439eb99aad589979ed8c186223309e08aba616e8a5cf89a1d3ad-merged.mount: Deactivated successfully.
Oct 12 16:59:32 np0005481680 podman[102887]: 2025-10-12 20:59:32.877508247 +0000 UTC m=+1.099260734 container remove 35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:59:32 np0005481680 systemd[1]: libpod-conmon-35a8170fbff6e39c7e0236eb8330dfb9b698cefeb9a627f792709398850dab07.scope: Deactivated successfully.
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct 12 16:59:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:33 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:33 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 12 16:59:33 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 12 16:59:33 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 12 16:59:33 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 12 16:59:33 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 88 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 88 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 88 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 88 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct 12 16:59:33 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct 12 16:59:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:33 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:33 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:33 np0005481680 systemd[1]: Stopping Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:34 np0005481680 podman[103141]: 2025-10-12 20:59:34.08793652 +0000 UTC m=+0.069502398 container died 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3d9bff4cc76b0d227cf5515b2be51080bc17fb638634b1e2a5d1c12e2630d6d0-merged.mount: Deactivated successfully.
Oct 12 16:59:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v24: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 12 16:59:34 np0005481680 podman[103141]: 2025-10-12 20:59:34.136871028 +0000 UTC m=+0.118436876 container remove 71c05854769de8f45df96fd36b6a056218a7a5e038dbf8c14dad12fbac59a9b6 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:34 np0005481680 bash[103141]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0
Oct 12 16:59:34 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Oct 12 16:59:34 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@node-exporter.compute-0.service: Failed with result 'exit-code'.
Oct 12 16:59:34 np0005481680 systemd[1]: Stopped Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:34 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@node-exporter.compute-0.service: Consumed 2.498s CPU time.
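Annotation: status=143 in the node-exporter stop above is not a crash. systemd reports the raw exit status, and 143 = 128 + 15 (SIGTERM), i.e. the container exited because it was sent the stop signal; the "Failed with result 'exit-code'" line is a side effect of that convention, and cephadm restarts the unit immediately afterwards. Decoding the status:

    import signal

    status = 143
    # Shell/systemd convention: 128 + N means "terminated by signal N".
    print(signal.Signals(status - 128).name)  # SIGTERM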
Oct 12 16:59:34 np0005481680 systemd[1]: Starting Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 89 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/17 lis/c=63/63 les/c/f=64/64/0 sis=89) [0] r=0 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 89 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 89 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=6 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[66,88)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:34 np0005481680 podman[103244]: 2025-10-12 20:59:34.624685061 +0000 UTC m=+0.055236241 container create 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8ecce1f2e68016c99b53ae99ff7cca4b2439643fe1ef6b53d92dc3e7bbb7e1/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 12 16:59:34 np0005481680 podman[103244]: 2025-10-12 20:59:34.695397649 +0000 UTC m=+0.125948869 container init 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:34 np0005481680 podman[103244]: 2025-10-12 20:59:34.599413561 +0000 UTC m=+0.029964751 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 12 16:59:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:34.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 90 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=6 ec=55/42 lis/c=88/66 les/c/f=89/67/0 sis=90 pruub=15.776262283s) [1] async=[1] r=-1 lpr=90 pi=[66,90)/1 crt=48'1034 mlcod 48'1034 active pruub 237.630844116s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 90 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=4 ec=55/42 lis/c=88/66 les/c/f=89/67/0 sis=90 pruub=15.771583557s) [1] async=[1] r=-1 lpr=90 pi=[66,90)/1 crt=48'1034 mlcod 48'1034 active pruub 237.626815796s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 90 pg[10.1a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=4 ec=55/42 lis/c=88/66 les/c/f=89/67/0 sis=90 pruub=15.771512032s) [1] r=-1 lpr=90 pi=[66,90)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 237.626815796s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 90 pg[10.a( v 48'1034 (0'0,48'1034] local-lis/les=88/89 n=6 ec=55/42 lis/c=88/66 les/c/f=89/67/0 sis=90 pruub=15.775222778s) [1] r=-1 lpr=90 pi=[66,90)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 237.630844116s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:34 np0005481680 podman[103244]: 2025-10-12 20:59:34.705335405 +0000 UTC m=+0.135886585 container start 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:34 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 90 pg[6.b( v 47'39 lc 0'0 (0'0,47'39] local-lis/les=89/90 n=1 ec=51/17 lis/c=63/63 les/c/f=64/64/0 sis=89) [0] r=0 lpr=89 pi=[63,89)/1 crt=47'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:34 np0005481680 bash[103244]: 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.715Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.715Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.717Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.717Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.719Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.719Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=arp
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=bcache
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=bonding
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=cpu
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=dmi
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=edac
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=entropy
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=filefd
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.720Z caller=node_exporter.go:117 level=info collector=netclass
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=netdev
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=netstat
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=nfs
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=nvme
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=os
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=pressure
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=rapl
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=selinux
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=softnet
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=stat
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=textfile
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=time
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=uname
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=xfs
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.721Z caller=node_exporter.go:117 level=info collector=zfs
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.722Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 12 16:59:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0[103259]: ts=2025-10-12T20:59:34.722Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 12 16:59:34 np0005481680 systemd[1]: Started Ceph node-exporter.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
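Annotation: with TLS disabled and the listener on [::]:9100, the restarted node-exporter can be scraped directly; Prometheus does the same thing the mgr's "GET /metrics" entries earlier in this window show. A quick sketch, assuming the exporter is reachable on localhost:

    import urllib.request

    # Fetch the plaintext exposition format and show a few sample lines.
    with urllib.request.urlopen("http://127.0.0.1:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)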
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 12 16:59:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 12 16:59:34 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 12 16:59:34 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 12 16:59:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:35 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.354159917 +0000 UTC m=+0.040353489 volume create 20edfe8762393e77580346708d1a16f5b1ff1a11cf6fd13d84391ac9572fe8d0
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.362543013 +0000 UTC m=+0.048736605 container create ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 systemd[1]: Started libpod-conmon-ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5.scope.
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.341690446 +0000 UTC m=+0.027884038 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:59:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b114ac2024e1c5f60ef3ef4f87d57197cc1726a07a0a26366903c4a93e1a114/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.468873967 +0000 UTC m=+0.155067619 container init ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.48375234 +0000 UTC m=+0.169945952 container start ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 eager_moore[103351]: 65534 65534
Oct 12 16:59:35 np0005481680 systemd[1]: libpod-ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5.scope: Deactivated successfully.
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.48843947 +0000 UTC m=+0.174633052 container attach ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.48886173 +0000 UTC m=+0.175055322 container died ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3b114ac2024e1c5f60ef3ef4f87d57197cc1726a07a0a26366903c4a93e1a114-merged.mount: Deactivated successfully.
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.528180411 +0000 UTC m=+0.214373983 container remove ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=eager_moore, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 podman[103335]: 2025-10-12 20:59:35.532190225 +0000 UTC m=+0.218383907 volume remove 20edfe8762393e77580346708d1a16f5b1ff1a11cf6fd13d84391ac9572fe8d0
Oct 12 16:59:35 np0005481680 systemd[1]: libpod-conmon-ff7dd5f979c41d6c9ba2e3bd6da4c4d636098ac0ff25a419fd3165ff83f7d7d5.scope: Deactivated successfully.
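Annotation: the throwaway alertmanager containers eager_moore and lucid_edison exist only to print "65534 65534" (the nobody uid/gid the image runs as), just as the earlier ceph helper printed "167 167" (the ceph user in the ceph image); cephadm probes the image's uid/gid before writing config files with matching ownership. A hedged reconstruction of such a probe (the exact command cephadm overrides the entrypoint with is not visible in these lines, so the stat invocation and path below are assumptions for illustration):

    import subprocess

    IMAGE = "quay.io/prometheus/alertmanager:v0.25.0"

    # Disposable container that prints the uid/gid owning a data directory,
    # mirroring the "65534 65534" journal lines. Entrypoint and path are
    # assumptions, not taken from the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/etc/alertmanager"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)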
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.602716839 +0000 UTC m=+0.047793730 volume create a878bb0a92f1340f7e5ec389489c0dc21eaf1a892275fefb6687e0b7e1ac03c6
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.616528024 +0000 UTC m=+0.061604915 container create 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:35 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe060003cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:35 np0005481680 systemd[1]: Started libpod-conmon-89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a.scope.
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.579507381 +0000 UTC m=+0.024584332 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:59:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641cd599e0e0a8969bed772ab6575bb64ab05b4c10832630b4122054b660f64c/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.710260983 +0000 UTC m=+0.155337854 container init 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.715755734 +0000 UTC m=+0.160832575 container start 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 lucid_edison[103384]: 65534 65534
Oct 12 16:59:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 12 16:59:35 np0005481680 systemd[1]: libpod-89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a.scope: Deactivated successfully.
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.723281788 +0000 UTC m=+0.168358639 container attach 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.72530807 +0000 UTC m=+0.170384921 container died 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:35 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe034003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-641cd599e0e0a8969bed772ab6575bb64ab05b4c10832630b4122054b660f64c-merged.mount: Deactivated successfully.
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.802561436 +0000 UTC m=+0.247638297 container remove 89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a (image=quay.io/prometheus/alertmanager:v0.25.0, name=lucid_edison, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:35 np0005481680 podman[103368]: 2025-10-12 20:59:35.807401781 +0000 UTC m=+0.252478702 volume remove a878bb0a92f1340f7e5ec389489c0dc21eaf1a892275fefb6687e0b7e1ac03c6
Oct 12 16:59:35 np0005481680 systemd[1]: libpod-conmon-89d4b516914cff024222809475955232dc2bcae28bf806bbd98e7bdb7bf34b7a.scope: Deactivated successfully.
Oct 12 16:59:35 np0005481680 systemd[1]: Stopping Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v28: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 3 objects/s recovering
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[98121]: ts=2025-10-12T20:59:36.171Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct 12 16:59:36 np0005481680 podman[103434]: 2025-10-12 20:59:36.18189887 +0000 UTC m=+0.119649607 container died 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b729b0a337d3a404cddeed5de9b56a41a487a8a36062dabcce062852ec2ccaa5-merged.mount: Deactivated successfully.
Oct 12 16:59:36 np0005481680 podman[103434]: 2025-10-12 20:59:36.316541392 +0000 UTC m=+0.254292089 container remove 4620d62e7905a637d1c85d56053b7ae81fd1bc9e1de5c4f1e4d83917c94965c9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:36 np0005481680 podman[103434]: 2025-10-12 20:59:36.337428788 +0000 UTC m=+0.275179485 volume remove b6da8c0369a92b8680304f6a7b8de19c8437a7bb373333a1fe31da1f46a60168
Oct 12 16:59:36 np0005481680 bash[103434]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0
Oct 12 16:59:36 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@alertmanager.compute-0.service: Deactivated successfully.
Oct 12 16:59:36 np0005481680 systemd[1]: Stopped Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:36 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@alertmanager.compute-0.service: Consumed 1.264s CPU time.
Oct 12 16:59:36 np0005481680 systemd[1]: Starting Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=infra.usagestats t=2025-10-12T20:59:36.510411247Z level=info msg="Usage stats are ready to report"
Oct 12 16:59:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:36.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:36 np0005481680 podman[103539]: 2025-10-12 20:59:36.650760835 +0000 UTC m=+0.043607532 volume create 517bf4f565ca507bd2e30256c835dc6c441975122ba6ae6d8be04f7a4f8d646a
Oct 12 16:59:36 np0005481680 podman[103539]: 2025-10-12 20:59:36.657806076 +0000 UTC m=+0.050652773 container create ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:36 np0005481680 systemd[91683]: Starting Mark boot as successful...
Oct 12 16:59:36 np0005481680 systemd[91683]: Finished Mark boot as successful.
Oct 12 16:59:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447aa58abc6cf4d45e9a37c1060924f02f02bd5dd34b5a83a645b05361428d7e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447aa58abc6cf4d45e9a37c1060924f02f02bd5dd34b5a83a645b05361428d7e/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 12 16:59:36 np0005481680 podman[103539]: 2025-10-12 20:59:36.709466795 +0000 UTC m=+0.102313512 container init ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:36 np0005481680 podman[103539]: 2025-10-12 20:59:36.719084192 +0000 UTC m=+0.111930889 container start ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:36 np0005481680 bash[103539]: ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87
Oct 12 16:59:36 np0005481680 podman[103539]: 2025-10-12 20:59:36.634725043 +0000 UTC m=+0.027571790 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct 12 16:59:36 np0005481680 systemd[1]: Started Ceph alertmanager.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.744Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.745Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.752Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.753Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:36 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 12 16:59:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.798Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.800Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.805Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct 12 16:59:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:36.805Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct 12 16:59:36 np0005481680 ceph-mgr[73901]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct 12 16:59:36 np0005481680 ceph-mgr[73901]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct 12 16:59:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:37 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe064004b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:37 np0005481680 systemd-logind[783]: New session 40 of user zuul.
Oct 12 16:59:37 np0005481680 systemd[1]: Started Session 40 of User zuul.
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.392488637 +0000 UTC m=+0.058772033 container create 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 systemd[1]: Started libpod-conmon-39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b.scope.
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.36229595 +0000 UTC m=+0.028579396 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:59:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.477113112 +0000 UTC m=+0.143396528 container init 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.482903652 +0000 UTC m=+0.149187028 container start 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 laughing_ganguly[103690]: 472 0
Oct 12 16:59:37 np0005481680 systemd[1]: libpod-39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b.scope: Deactivated successfully.
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.486132425 +0000 UTC m=+0.152415801 container attach 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.486433892 +0000 UTC m=+0.152717268 container died 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c719999c00830fe6c85bbf4e1c4541c67c94f988248a07f3a7f6e6d37cbaa52d-merged.mount: Deactivated successfully.
Oct 12 16:59:37 np0005481680 podman[103651]: 2025-10-12 20:59:37.530749712 +0000 UTC m=+0.197033088 container remove 39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b (image=quay.io/ceph/grafana:10.4.0, name=laughing_ganguly, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 systemd[1]: libpod-conmon-39ff55a28efd8eaa70a977ddfe20ad369e297a802cdf66646a0c3542e221da3b.scope: Deactivated successfully.
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.628517386 +0000 UTC m=+0.070087564 container create a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:37 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:37 np0005481680 systemd[1]: Started libpod-conmon-a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f.scope.
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.598615406 +0000 UTC m=+0.040185664 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:59:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.716139258 +0000 UTC m=+0.157709476 container init a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.725959811 +0000 UTC m=+0.167529999 container start a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 elastic_brown[103755]: 472 0
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.729139732 +0000 UTC m=+0.170710000 container attach a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 systemd[1]: libpod-a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f.scope: Deactivated successfully.
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.731112953 +0000 UTC m=+0.172683141 container died a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 12 16:59:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:37 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040000f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 12 16:59:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 12 16:59:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-694f547f460ac02e69ef14369a50dca7bc6e82f1b5d1248f3005c22dc81b459c-merged.mount: Deactivated successfully.
Oct 12 16:59:37 np0005481680 podman[103738]: 2025-10-12 20:59:37.782748361 +0000 UTC m=+0.224318549 container remove a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f (image=quay.io/ceph/grafana:10.4.0, name=elastic_brown, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:37 np0005481680 systemd[1]: libpod-conmon-a20ea1bfb7a42b89964a45d782b6db6f9c9a956d08d2b696103070fc7887f96f.scope: Deactivated successfully.
Oct 12 16:59:37 np0005481680 systemd[1]: Stopping Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=server t=2025-10-12T20:59:38.112692074Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=ticker t=2025-10-12T20:59:38.112746745Z level=info msg=stopped last_tick=2025-10-12T20:59:30Z
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=tracing t=2025-10-12T20:59:38.112823207Z level=info msg="Closing tracing"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=grafana-apiserver t=2025-10-12T20:59:38.113148227Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v31: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[98789]: logger=sqlstore.transactions t=2025-10-12T20:59:38.12456685Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct 12 16:59:38 np0005481680 podman[103901]: 2025-10-12 20:59:38.143784014 +0000 UTC m=+0.076593270 container died d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bb3eb496a29a72921f6f7f359b9adc83277c2f73b7e8455d5bb1978f9bf4c3a0-merged.mount: Deactivated successfully.
Oct 12 16:59:38 np0005481680 podman[103901]: 2025-10-12 20:59:38.181132704 +0000 UTC m=+0.113941960 container remove d3b819468e082ad58403318d6af80d851e7fbbd82f1bd07fc4841c24ec067260 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:38 np0005481680 bash[103901]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0
Oct 12 16:59:38 np0005481680 python3.9[103888]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 12 16:59:38 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@grafana.compute-0.service: Deactivated successfully.
Oct 12 16:59:38 np0005481680 systemd[1]: Stopped Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:38 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@grafana.compute-0.service: Consumed 4.580s CPU time.
Oct 12 16:59:38 np0005481680 systemd[1]: Starting Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:38 np0005481680 podman[104084]: 2025-10-12 20:59:38.642072795 +0000 UTC m=+0.056212936 container create 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9e01159c49e8b6813392dab7853aad258070a0e6c92d4bbd5974bf2785e746/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9e01159c49e8b6813392dab7853aad258070a0e6c92d4bbd5974bf2785e746/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9e01159c49e8b6813392dab7853aad258070a0e6c92d4bbd5974bf2785e746/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9e01159c49e8b6813392dab7853aad258070a0e6c92d4bbd5974bf2785e746/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9e01159c49e8b6813392dab7853aad258070a0e6c92d4bbd5974bf2785e746/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:38 np0005481680 podman[104084]: 2025-10-12 20:59:38.61774428 +0000 UTC m=+0.031884461 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct 12 16:59:38 np0005481680 podman[104084]: 2025-10-12 20:59:38.716113149 +0000 UTC m=+0.130253320 container init 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:38 np0005481680 podman[104084]: 2025-10-12 20:59:38.728405216 +0000 UTC m=+0.142545367 container start 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:38 np0005481680 bash[104084]: 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e
Oct 12 16:59:38 np0005481680 systemd[1]: Started Ceph grafana.compute-0 for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:38.754Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00059411s
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct 12 16:59:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: [prometheus INFO root] Restarting engine...
Oct 12 16:59:38 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:38] ENGINE Bus STOPPING
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:38] ENGINE Bus STOPPING
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948548066Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-12T20:59:38Z
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948754831Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948761741Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948765501Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948768961Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948772441Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948775711Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948778971Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948782452Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948785822Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948816323Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948821154Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948824584Z level=info msg=Target target=[all]
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948830354Z level=info msg="Path Home" path=/usr/share/grafana
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948833634Z level=info msg="Path Data" path=/var/lib/grafana
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948836894Z level=info msg="Path Logs" path=/var/log/grafana
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948839974Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948843904Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=settings t=2025-10-12T20:59:38.948847144Z level=info msg="App mode production"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=sqlstore t=2025-10-12T20:59:38.94909895Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=sqlstore t=2025-10-12T20:59:38.949112711Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=migrator t=2025-10-12T20:59:38.950473066Z level=info msg="Starting DB migrations"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=migrator t=2025-10-12T20:59:38.966373014Z level=info msg="migrations completed" performed=0 skipped=547 duration=465.431µs
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=sqlstore t=2025-10-12T20:59:38.967257507Z level=info msg="Created default organization"
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=secrets t=2025-10-12T20:59:38.96776689Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct 12 16:59:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugin.store t=2025-10-12T20:59:38.992441525Z level=info msg="Loading plugins..."
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:39 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:39] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:39] ENGINE Bus STOPPED
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:39] ENGINE Bus STOPPED
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:39] ENGINE Bus STARTING
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:39] ENGINE Bus STARTING
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=local.finder t=2025-10-12T20:59:39.065982945Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugin.store t=2025-10-12T20:59:39.066016606Z level=info msg="Plugins loaded" count=55 duration=73.576111ms
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=query_data t=2025-10-12T20:59:39.06890996Z level=info msg="Query Service initialization"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=live.push_http t=2025-10-12T20:59:39.071775614Z level=info msg="Live Push Gateway initialization"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.migration t=2025-10-12T20:59:39.074930176Z level=info msg=Starting
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.state.manager t=2025-10-12T20:59:39.085316023Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=infra.usagestats.collector t=2025-10-12T20:59:39.087229971Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=provisioning.datasources t=2025-10-12T20:59:39.089119601Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=provisioning.alerting t=2025-10-12T20:59:39.110857749Z level=info msg="starting to provision alerting"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=provisioning.alerting t=2025-10-12T20:59:39.11088019Z level=info msg="finished to provision alerting"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafanaStorageLogger t=2025-10-12T20:59:39.11125121Z level=info msg="Storage starting"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.state.manager t=2025-10-12T20:59:39.112415069Z level=info msg="Warming state cache for startup"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=http.server t=2025-10-12T20:59:39.118420644Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=http.server t=2025-10-12T20:59:39.118940577Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.multiorg.alertmanager t=2025-10-12T20:59:39.128683608Z level=info msg="Starting MultiOrg Alertmanager"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:39] ENGINE Serving on http://:::9283
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: [12/Oct/2025:20:59:39] ENGINE Bus STARTED
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:39] ENGINE Serving on http://:::9283
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.error] [12/Oct/2025:20:59:39] ENGINE Bus STARTED
Oct 12 16:59:39 np0005481680 ceph-mgr[73901]: [prometheus INFO root] Engine started.
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.state.manager t=2025-10-12T20:59:39.200886235Z level=info msg="State cache has been initialized" states=0 duration=88.469445ms
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ngalert.scheduler t=2025-10-12T20:59:39.200943006Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=ticker t=2025-10-12T20:59:39.201017248Z level=info msg=starting first_tick=2025-10-12T20:59:40Z
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=provisioning.dashboard t=2025-10-12T20:59:39.204728943Z level=info msg="starting to provision dashboards"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=provisioning.dashboard t=2025-10-12T20:59:39.222475609Z level=info msg="finished to provision dashboards"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugins.update.checker t=2025-10-12T20:59:39.228546765Z level=info msg="Update check succeeded" duration=99.885128ms
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana.update.checker t=2025-10-12T20:59:39.251457095Z level=info msg="Update check succeeded" duration=122.671735ms
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana-apiserver t=2025-10-12T20:59:39.434529161Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana-apiserver t=2025-10-12T20:59:39.435327002Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct 12 16:59:39 np0005481680 python3.9[104294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:39 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:39 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:39 np0005481680 podman[104365]: 2025-10-12 20:59:39.800196924 +0000 UTC m=+0.077300229 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 16:59:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 12 16:59:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:39 np0005481680 podman[104365]: 2025-10-12 20:59:39.919382049 +0000 UTC m=+0.196485334 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v32: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 12 16:59:40 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 12 16:59:40 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 12 16:59:40 np0005481680 podman[104560]: 2025-10-12 20:59:40.50284199 +0000 UTC m=+0.069607760 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:40 np0005481680 podman[104560]: 2025-10-12 20:59:40.508809074 +0000 UTC m=+0.075574844 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 12 16:59:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 12 16:59:40 np0005481680 python3.9[104695]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 16:59:40 np0005481680 podman[104727]: 2025-10-12 20:59:40.978163362 +0000 UTC m=+0.074504327 container exec faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 16:59:40 np0005481680 podman[104727]: 2025-10-12 20:59:40.997363106 +0000 UTC m=+0.093704031 container exec_died faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 16:59:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:41 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:41 np0005481680 podman[104815]: 2025-10-12 20:59:41.27250459 +0000 UTC m=+0.068855902 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:59:41 np0005481680 podman[104815]: 2025-10-12 20:59:41.285394761 +0000 UTC m=+0.081745973 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 16:59:41 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 12 16:59:41 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 12 16:59:41 np0005481680 podman[104935]: 2025-10-12 20:59:41.540591373 +0000 UTC m=+0.059872000 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 12 16:59:41 np0005481680 podman[104935]: 2025-10-12 20:59:41.550351384 +0000 UTC m=+0.069632001 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph)
Oct 12 16:59:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:41 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:41 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040001e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:41 np0005481680 podman[105032]: 2025-10-12 20:59:41.821774672 +0000 UTC m=+0.057857128 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 12 16:59:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 12 16:59:41 np0005481680 podman[105032]: 2025-10-12 20:59:41.85241967 +0000 UTC m=+0.088502066 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 12 16:59:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 12 16:59:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 12 16:59:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:42] "GET /metrics HTTP/1.1" 200 48287 "" "Prometheus/2.51.0"
Oct 12 16:59:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:42] "GET /metrics HTTP/1.1" 200 48287 "" "Prometheus/2.51.0"
Oct 12 16:59:42 np0005481680 python3.9[105115]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 16:59:42 np0005481680 podman[105154]: 2025-10-12 20:59:42.118334578 +0000 UTC m=+0.061737559 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v35: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 12 16:59:42 np0005481680 podman[105154]: 2025-10-12 20:59:42.288268128 +0000 UTC m=+0.231671089 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct 12 16:59:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:42 np0005481680 podman[105339]: 2025-10-12 20:59:42.777751773 +0000 UTC m=+0.082174584 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:42 np0005481680 podman[105339]: 2025-10-12 20:59:42.833019934 +0000 UTC m=+0.137442735 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 96 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=6 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.905416489s) [1] r=-1 lpr=96 pi=[73,96)/1 crt=48'1034 mlcod 0'0 active pruub 244.921203613s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 96 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=6 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.905380249s) [1] r=-1 lpr=96 pi=[73,96)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 244.921203613s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 96 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.903933525s) [1] r=-1 lpr=96 pi=[73,96)/1 crt=48'1034 mlcod 0'0 active pruub 244.921218872s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:42 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 96 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=96 pruub=14.903873444s) [1] r=-1 lpr=96 pi=[73,96)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 244.921218872s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 16:59:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 16:59:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:43 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:43 np0005481680 python3.9[105487]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.530511268 +0000 UTC m=+0.057479869 container create ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:59:43 np0005481680 systemd[1]: Started libpod-conmon-ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe.scope.
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.499027208 +0000 UTC m=+0.025995869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:43 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.629211885 +0000 UTC m=+0.156180486 container init ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.63640211 +0000 UTC m=+0.163370681 container start ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.639838139 +0000 UTC m=+0.166806740 container attach ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:59:43 np0005481680 priceless_kepler[105612]: 167 167
Oct 12 16:59:43 np0005481680 systemd[1]: libpod-ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe.scope: Deactivated successfully.
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.642938778 +0000 UTC m=+0.169907339 container died ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:43 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:43 np0005481680 systemd[1]: var-lib-containers-storage-overlay-41dde29d5804c8d2afa5730995f6c7eb5e9396ee5fcc62fd348bdd5589274e5f-merged.mount: Deactivated successfully.
Oct 12 16:59:43 np0005481680 podman[105577]: 2025-10-12 20:59:43.700358375 +0000 UTC m=+0.227326976 container remove ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:43 np0005481680 systemd[1]: libpod-conmon-ee41f568deb60ede70aaad0190731ff4fcd9e1e96aac5d506a2788535ed68fbe.scope: Deactivated successfully.
Oct 12 16:59:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:43 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe048001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 97 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 97 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=5 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 97 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=6 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:43 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 97 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=73/74 n=6 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 16:59:43 np0005481680 podman[105672]: 2025-10-12 20:59:43.916214535 +0000 UTC m=+0.061996555 container create 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 16:59:43 np0005481680 systemd[1]: Started libpod-conmon-6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148.scope.
Oct 12 16:59:43 np0005481680 podman[105672]: 2025-10-12 20:59:43.89386642 +0000 UTC m=+0.039648450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:43 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:44 np0005481680 podman[105672]: 2025-10-12 20:59:44.008280462 +0000 UTC m=+0.154062482 container init 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:44 np0005481680 podman[105672]: 2025-10-12 20:59:44.021587655 +0000 UTC m=+0.167369665 container start 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 16:59:44 np0005481680 podman[105672]: 2025-10-12 20:59:44.026100521 +0000 UTC m=+0.171882501 container attach 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 16:59:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v38: 337 pgs: 337 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 12 16:59:44 np0005481680 python3.9[105766]: ansible-ansible.builtin.service_facts Invoked
Oct 12 16:59:44 np0005481680 network[105789]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 16:59:44 np0005481680 network[105791]: 'network-scripts' will be removed from distribution in near future.
Oct 12 16:59:44 np0005481680 network[105792]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct 12 16:59:44 np0005481680 gifted_mirzakhani[105731]: --> passed data devices: 0 physical, 1 LVM
Oct 12 16:59:44 np0005481680 gifted_mirzakhani[105731]: --> All data devices are unavailable
Oct 12 16:59:44 np0005481680 podman[105672]: 2025-10-12 20:59:44.453411418 +0000 UTC m=+0.599193438 container died 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:59:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:44.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:44.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 98 pg[6.e( v 47'39 (0'0,47'39] local-lis/les=76/77 n=1 ec=51/17 lis/c=76/76 les/c/f=77/77/0 sis=98 pruub=8.316335678s) [1] r=-1 lpr=98 pi=[76,98)/1 crt=47'39 mlcod 47'39 active pruub 240.351638794s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 98 pg[6.e( v 47'39 (0'0,47'39] local-lis/les=76/77 n=1 ec=51/17 lis/c=76/76 les/c/f=77/77/0 sis=98 pruub=8.316271782s) [1] r=-1 lpr=98 pi=[76,98)/1 crt=47'39 mlcod 0'0 unknown NOTIFY pruub 240.351638794s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 98 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=5 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:44 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 98 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=6 ec=55/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[73,97)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 12 16:59:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 12 16:59:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:45 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040001fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:45 np0005481680 systemd[1]: libpod-6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148.scope: Deactivated successfully.
Oct 12 16:59:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fa65ca3932956f43d5bb8dc1e93c9d60c3a1745e410a0c90f51299f3f2dda7d1-merged.mount: Deactivated successfully.
Oct 12 16:59:45 np0005481680 podman[105672]: 2025-10-12 20:59:45.081376043 +0000 UTC m=+1.227158053 container remove 6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 16:59:45 np0005481680 systemd[1]: libpod-conmon-6d9f743721a364ddabf34a98e7d38cb79e035c8f957c646f51c3e8c1382e0148.scope: Deactivated successfully.
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct 12 16:59:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:45 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:45 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 12 16:59:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 12 16:59:45 np0005481680 podman[105918]: 2025-10-12 20:59:45.881599189 +0000 UTC m=+0.071243023 container create bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 99 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=6 ec=55/42 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=15.007000923s) [1] async=[1] r=-1 lpr=99 pi=[73,99)/1 crt=48'1034 mlcod 48'1034 active pruub 248.048004150s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:45 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 99 pg[10.d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=6 ec=55/42 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=15.006903648s) [1] r=-1 lpr=99 pi=[73,99)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 248.048004150s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 99 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=5 ec=55/42 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=15.005217552s) [1] async=[1] r=-1 lpr=99 pi=[73,99)/1 crt=48'1034 mlcod 48'1034 active pruub 248.047988892s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 16:59:45 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 99 pg[10.1d( v 48'1034 (0'0,48'1034] local-lis/les=97/98 n=5 ec=55/42 lis/c=97/73 les/c/f=98/74/0 sis=99 pruub=15.004920006s) [1] r=-1 lpr=99 pi=[73,99)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 248.047988892s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 16:59:45 np0005481680 systemd[1]: Started libpod-conmon-bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7.scope.
Oct 12 16:59:45 np0005481680 podman[105918]: 2025-10-12 20:59:45.855006735 +0000 UTC m=+0.044650599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:45 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:45 np0005481680 podman[105918]: 2025-10-12 20:59:45.997748186 +0000 UTC m=+0.187392050 container init bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:46 np0005481680 podman[105918]: 2025-10-12 20:59:46.009096507 +0000 UTC m=+0.198740331 container start bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:46 np0005481680 podman[105918]: 2025-10-12 20:59:46.012989408 +0000 UTC m=+0.202633242 container attach bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 16:59:46 np0005481680 awesome_wing[105934]: 167 167
Oct 12 16:59:46 np0005481680 systemd[1]: libpod-bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7.scope: Deactivated successfully.
Oct 12 16:59:46 np0005481680 podman[105918]: 2025-10-12 20:59:46.017997486 +0000 UTC m=+0.207641310 container died bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 16:59:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bdd3149e10b7bea8bb8af83d4440db505a39f74754c2c618f3c489e2c3deaf5e-merged.mount: Deactivated successfully.
Oct 12 16:59:46 np0005481680 podman[105918]: 2025-10-12 20:59:46.073641927 +0000 UTC m=+0.263285761 container remove bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_wing, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:59:46 np0005481680 systemd[1]: libpod-conmon-bf0097fe11f76faf05d5743dbb81a5ae25fc06a9eb8ac00456b89541ee4063b7.scope: Deactivated successfully.
Oct 12 16:59:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v41: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.28881588 +0000 UTC m=+0.057295274 container create 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 16:59:46 np0005481680 systemd[1]: Started libpod-conmon-95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5.scope.
Oct 12 16:59:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a76d0343db6f9088ee53e8f9c549f113062bc4cf63d32b8527f9b42c67f77ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a76d0343db6f9088ee53e8f9c549f113062bc4cf63d32b8527f9b42c67f77ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a76d0343db6f9088ee53e8f9c549f113062bc4cf63d32b8527f9b42c67f77ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a76d0343db6f9088ee53e8f9c549f113062bc4cf63d32b8527f9b42c67f77ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.269989905 +0000 UTC m=+0.038469309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:46 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.e scrub starts
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.373661531 +0000 UTC m=+0.142140915 container init 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 16:59:46 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.e scrub ok
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.388315438 +0000 UTC m=+0.156794822 container start 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.391768206 +0000 UTC m=+0.160247590 container attach 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 16:59:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]: {
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:    "0": [
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:        {
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "devices": [
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "/dev/loop3"
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            ],
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "lv_name": "ceph_lv0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "lv_size": "21470642176",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "name": "ceph_lv0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "tags": {
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.cephx_lockbox_secret": "",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.cluster_name": "ceph",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.crush_device_class": "",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.encrypted": "0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.osd_id": "0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.type": "block",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.vdo": "0",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:                "ceph.with_tpm": "0"
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            },
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "type": "block",
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:            "vg_name": "ceph_vg0"
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:        }
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]:    ]
Oct 12 16:59:46 np0005481680 focused_brahmagupta[105974]: }
Oct 12 16:59:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 16:59:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 16:59:46 np0005481680 systemd[1]: libpod-95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5.scope: Deactivated successfully.
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.744631429 +0000 UTC m=+0.513110853 container died 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:59:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T20:59:46.757Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.0038475s
Oct 12 16:59:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0a76d0343db6f9088ee53e8f9c549f113062bc4cf63d32b8527f9b42c67f77ab-merged.mount: Deactivated successfully.
Oct 12 16:59:46 np0005481680 podman[105957]: 2025-10-12 20:59:46.804022797 +0000 UTC m=+0.572502211 container remove 95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 16:59:46 np0005481680 systemd[1]: libpod-conmon-95a346e98be44786d6bc3da6501c7c2caeae5e654ee43dabf74fc840b402f5c5.scope: Deactivated successfully.
Oct 12 16:59:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 12 16:59:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 12 16:59:46 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 12 16:59:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:47 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe03c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 12 16:59:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 12 16:59:47 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.650284216 +0000 UTC m=+0.067566568 container create faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 16:59:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[96714]: 12/10/2025 20:59:47 : epoch 68ec1666 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe040002940 fd 47 proxy ignored for local
Oct 12 16:59:47 np0005481680 kernel: ganesha.nfsd[103630]: segfault at 50 ip 00007fe11bc1032e sp 00007fe0e8ff8210 error 4 in libntirpc.so.5.8[7fe11bbf5000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 12 16:59:47 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
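
Decoding the two kernel lines above: on x86-64, error 4 is a user-mode read of an unmapped page, and the fault address 50 suggests a NULL structure pointer dereferenced at member offset 0x50; the byte marked <45> in the Code dump starts the faulting instruction (45 8b 65 50 decodes to mov 0x50(%r13),%r12d, consistent with %r13 being NULL). Subtracting the mapping base from the instruction pointer gives the offset inside libntirpc.so.5.8. A sketch of that arithmetic with values copied from the line; note that systemd-coredump below reports the frame as + 0x2232e relative to the ELF load base, which need not equal this text-segment offset:

    import re

    line = ("segfault at 50 ip 00007fe11bc1032e sp 00007fe0e8ff8210 error 4 "
            "in libntirpc.so.5.8[7fe11bbf5000+2c000]")
    m = re.search(r"ip ([0-9a-f]+) .* in (\S+)\[([0-9a-f]+)\+", line)
    ip, lib, base = int(m.group(1), 16), m.group(2), int(m.group(3), 16)
    print(f"{lib} text-segment offset {ip - base:#x}")   # -> 0x1b32e
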
Oct 12 16:59:47 np0005481680 systemd[1]: Started libpod-conmon-faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2.scope.
Oct 12 16:59:47 np0005481680 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.627743176 +0000 UTC m=+0.045025558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:47 np0005481680 systemd[1]: Started Process Core Dump (PID 106147/UID 0).
Oct 12 16:59:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.789997378 +0000 UTC m=+0.207279760 container init faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.800654423 +0000 UTC m=+0.217936785 container start faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.804755058 +0000 UTC m=+0.222037480 container attach faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 16:59:47 np0005481680 musing_tharp[106149]: 167 167
Oct 12 16:59:47 np0005481680 systemd[1]: libpod-faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2.scope: Deactivated successfully.
Oct 12 16:59:47 np0005481680 conmon[106149]: conmon faab67bcae79d8e23727 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2.scope/container/memory.events
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.810602258 +0000 UTC m=+0.227884620 container died faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b56c50d39e243b8f8f794d43a079a057521be090f1f46dfb5af04db118da3991-merged.mount: Deactivated successfully.
Oct 12 16:59:47 np0005481680 podman[106127]: 2025-10-12 20:59:47.87641823 +0000 UTC m=+0.293700592 container remove faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 16:59:47 np0005481680 systemd[1]: libpod-conmon-faab67bcae79d8e237272b4a0350e5486eca9ba0d9f2af24d0903bb9483164a2.scope: Deactivated successfully.
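
musing_tharp follows the same pattern as focused_brahmagupta above and musing_satoshi below: cephadm drives a one-shot container through the full podman lifecycle (create, init, start, attach, died, remove) in well under a second, so the "died" events here are expected, not crashes. The event timestamps make the lifetime easy to read off; values below are copied from the log, truncated to microseconds:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    start = datetime.strptime("2025-10-12 20:59:47.800654", fmt)
    died = datetime.strptime("2025-10-12 20:59:47.810602", fmt)
    print(f"{(died - start).total_seconds():.3f}s")   # ~0.010s from start to died
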
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.076975097 +0000 UTC m=+0.066720097 container create 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 16:59:48 np0005481680 systemd[1]: Started libpod-conmon-990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97.scope.
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v43: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.052338283 +0000 UTC m=+0.042083363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 16:59:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd14067ff66ddc6fd044c00975d72ac163df6947cf04341e2f00a0d24512f5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd14067ff66ddc6fd044c00975d72ac163df6947cf04341e2f00a0d24512f5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd14067ff66ddc6fd044c00975d72ac163df6947cf04341e2f00a0d24512f5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd14067ff66ddc6fd044c00975d72ac163df6947cf04341e2f00a0d24512f5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
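
The four xfs notices above (and the similar run at 16:59:59) are informational: these overlay mounts lack the xfs bigtime feature, so inode timestamps are 32-bit and top out at 0x7fffffff seconds after the epoch. Quick arithmetic on that limit:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit time_t maximum quoted in the kernel message.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
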
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.185884707 +0000 UTC m=+0.175629727 container init 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.197091966 +0000 UTC m=+0.186836986 container start 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.211945357 +0000 UTC m=+0.201690437 container attach 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 16:59:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 16:59:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 16:59:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 16:59:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 12 16:59:48 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 12 16:59:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:48 np0005481680 systemd-coredump[106151]: Process 96718 (ganesha.nfsd) of user 0 dumped core.

                                                       Stack trace of thread 60:
                                                       #0  0x00007fe11bc1032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 16:59:48 np0005481680 lvm[106296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 16:59:48 np0005481680 lvm[106296]: VG ceph_vg0 finished
Oct 12 16:59:48 np0005481680 musing_satoshi[106203]: {}
Oct 12 16:59:48 np0005481680 systemd[1]: libpod-990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97.scope: Deactivated successfully.
Oct 12 16:59:48 np0005481680 podman[106181]: 2025-10-12 20:59:48.978753354 +0000 UTC m=+0.968498354 container died 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 16:59:48 np0005481680 systemd[1]: libpod-990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97.scope: Consumed 1.228s CPU time.
Oct 12 16:59:48 np0005481680 systemd[1]: systemd-coredump@0-106147-0.service: Deactivated successfully.
Oct 12 16:59:48 np0005481680 systemd[1]: systemd-coredump@0-106147-0.service: Consumed 1.180s CPU time.
Oct 12 16:59:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2fd14067ff66ddc6fd044c00975d72ac163df6947cf04341e2f00a0d24512f5f-merged.mount: Deactivated successfully.
Oct 12 16:59:49 np0005481680 podman[106181]: 2025-10-12 20:59:49.037031132 +0000 UTC m=+1.026776122 container remove 990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:59:49 np0005481680 podman[106310]: 2025-10-12 20:59:49.061189453 +0000 UTC m=+0.038543632 container died faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:59:49 np0005481680 systemd[1]: libpod-conmon-990b7e24f100c416ab8bf07b21a92ba04b49e015144e9be5b4916ea1a2204c97.scope: Deactivated successfully.
Oct 12 16:59:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-775069da103ffa31071d62e0068a918b0346ff2de7a078bc2d094072e6522f81-merged.mount: Deactivated successfully.
Oct 12 16:59:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 16:59:49 np0005481680 podman[106310]: 2025-10-12 20:59:49.112542144 +0000 UTC m=+0.089896293 container remove faf504f2bb2d8aa4a3b5484040e830773476715f3a21e8ff1b547e38b7e094ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 16:59:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 16:59:49 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 16:59:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:49 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 16:59:49 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.878s CPU time.
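
status=139 above is the raw exit status handed back through the container runtime: 128 + 11, i.e. the SIGSEGV from the ganesha.nfsd segfault and core dump logged at 16:59:47-48. A one-line decode, purely illustrative:

    import signal

    status = 139                     # from the systemd line above
    print(signal.Signals(status - 128).name if status > 128 else "clean exit")
    # -> SIGSEGV
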
Oct 12 16:59:49 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Oct 12 16:59:49 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Oct 12 16:59:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 16:59:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v44: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Oct 12 16:59:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 12 16:59:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Oct 12 16:59:50 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Oct 12 16:59:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 12 16:59:51 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
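
The osd pool set pgp_num_actual commands (val 16 here, then 17 through 21 over the next minute) are the mgr ramping placement-group placement one step at a time toward the pool's pgp_num target; each dispatch/finished pair commits a new osdmap epoch (e100 through e111 in this section) and triggers the brief peering visible in the osd.0 lines. A sketch for watching the ramp from outside; ceph osd pool get is a real command, but the JSON shape in the comment is an assumption:

    import json, subprocess, time

    for _ in range(5):
        out = subprocess.run(
            ["ceph", "osd", "pool", "get", "default.rgw.log", "pgp_num", "-f", "json"],
            capture_output=True, text=True, check=True).stdout
        print(json.loads(out))   # e.g. {"pool": "default.rgw.log", "pgp_num": 17}
        time.sleep(2)
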
Oct 12 16:59:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 12 16:59:51 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 12 16:59:51 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 101 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/17 lis/c=63/63 les/c/f=64/64/0 sis=101) [0] r=0 lpr=101 pi=[63,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 16:59:52 np0005481680 python3.9[106549]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
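
The lineinfile call above appears to be used as a check rather than an edit: /proc/cmdline is not writable, so with state=present the task can only come back unchanged when cloud-init=disabled is already on the kernel command line. The equivalent check in plain Python:

    # Read the kernel command line directly and test for the token.
    with open("/proc/cmdline") as f:
        print("cloud-init=disabled" in f.read().split())
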
Oct 12 16:59:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:52] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 16:59:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:20:59:52] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 16:59:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v46: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 12 16:59:52 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 12 16:59:52 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 102 pg[6.f( v 47'39 lc 46'1 (0'0,47'39] local-lis/les=101/102 n=3 ec=51/17 lis/c=63/63 les/c/f=64/64/0 sis=101) [0] r=0 lpr=101 pi=[63,101)/1 crt=47'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 16:59:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Oct 12 16:59:52 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Oct 12 16:59:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:52.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:52 np0005481680 python3.9[106699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:59:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 12 16:59:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 12 16:59:53 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 12 16:59:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 12 16:59:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 12 16:59:53 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 12 16:59:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/205953 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
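
This haproxy warning is the downstream effect of the ganesha crash: the layer-4 check to backend nfs.cephfs.2 is refused while the unit is down, and the systemd restart scheduled at 16:59:59 below brings it back. A sketch of the same kind of L4 check; host and port are assumptions, since the log names neither:

    import socket

    try:
        socket.create_connection(("192.168.122.100", 2049), timeout=1).close()
        print("UP")
    except OSError as exc:
        print("DOWN:", exc)          # connection refused while ganesha restarts
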
Oct 12 16:59:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v49: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 12 16:59:54 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 12 16:59:54 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 12 16:59:54 np0005481680 python3.9[106880]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 16:59:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 16:59:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:54.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 12 16:59:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 12 16:59:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct 12 16:59:55 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct 12 16:59:55 np0005481680 python3.9[107039]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 16:59:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 12 16:59:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 12 16:59:55 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 12 16:59:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 1 remapped+peering, 2 active+remapped, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:56 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.a scrub starts
Oct 12 16:59:56 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 12.a scrub ok
Oct 12 16:59:56 np0005481680 python3.9[107124]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 16:59:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 12 16:59:57 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 12 16:59:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v54: 337 pgs: 1 remapped+peering, 2 active+remapped, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 16:59:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 12 16:59:58 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 12 16:59:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 16:59:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:20:59:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 16:59:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 16:59:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 16:59:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:20:59:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 16:59:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 12 16:59:59 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 12 16:59:59 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 1.
Oct 12 16:59:59 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 16:59:59 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.878s CPU time.
Oct 12 16:59:59 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 16:59:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 16:59:59 np0005481680 podman[107244]: 2025-10-12 20:59:59.865229596 +0000 UTC m=+0.069702372 container create 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 16:59:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba976124df770b21aa5fd8a91bf06939177461671478d3de461bacab7579deb6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba976124df770b21aa5fd8a91bf06939177461671478d3de461bacab7579deb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba976124df770b21aa5fd8a91bf06939177461671478d3de461bacab7579deb6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba976124df770b21aa5fd8a91bf06939177461671478d3de461bacab7579deb6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 16:59:59 np0005481680 podman[107244]: 2025-10-12 20:59:59.834800835 +0000 UTC m=+0.039273651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 16:59:59 np0005481680 podman[107244]: 2025-10-12 20:59:59.948238752 +0000 UTC m=+0.152711558 container init 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 16:59:59 np0005481680 podman[107244]: 2025-10-12 20:59:59.96183746 +0000 UTC m=+0.166310236 container start 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Oct 12 16:59:59 np0005481680 bash[107244]: 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b
Oct 12 16:59:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 20:59:59 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 16:59:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 20:59:59 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 16:59:59 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:00:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:00 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
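
"IN GRACE, duration 90" means the restarted server spends 90 seconds accepting only state-reclaim requests (locks and opens held before the crash) and refusing new state until the window closes. From the log timestamp:

    from datetime import datetime, timedelta

    start = datetime(2025, 10, 12, 21, 0, 0)   # from the ganesha line above
    print(start + timedelta(seconds=90))       # grace ends around 21:01:30
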
Oct 12 17:00:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 17:00:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 12 17:00:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 12 17:00:00 np0005481680 ceph-osd[81892]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 12 17:00:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:00.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:01 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 12 17:00:01 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 107 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=15.164162636s) [2] r=-1 lpr=107 pi=[66,107)/1 crt=48'1034 mlcod 0'0 active pruub 264.011749268s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:01 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 107 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=107 pruub=15.164005280s) [2] r=-1 lpr=107 pi=[66,107)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 264.011749268s@ mbc={}] state<Start>: transitioning to Stray
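
The pair of osd.0 lines above records an interval change for pg 10.12 driven by the pgp_num ramp: up and acting both move from [0] to [2], osd.0's role drops from 0 (primary) to -1 (not in the acting set), so the PG transitions to Stray on this OSD. A toy decode of the role arithmetic, with the sets copied from the line:

    up_after, acting_after = [2], [2]   # copied from the line above
    my_osd = 0
    role = acting_after.index(my_osd) if my_osd in acting_after else -1
    print(role)                         # -> -1: osd.0 leaves the acting set, goes Stray
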
Oct 12 17:00:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:02] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:00:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:02] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:00:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 12 17:00:02 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 12 17:00:02 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 108 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=108) [0] r=0 lpr=108 pi=[62,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:02 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 108 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=108) [2]/[0] r=0 lpr=108 pi=[66,108)/1 crt=48'1034 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:02 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 108 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=66/67 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=108) [2]/[0] r=0 lpr=108 pi=[66,108)/1 crt=48'1034 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 12 17:00:03 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 109 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:03 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 109 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=62/62 les/c/f=63/63/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[62,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 17:00:03 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 109 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=108/109 n=4 ec=55/42 lis/c=66/66 les/c/f=67/67/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[66,108)/1 crt=48'1034 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 17:00:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 12 17:00:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 110 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=108/109 n=4 ec=55/42 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.965065956s) [2] async=[2] r=-1 lpr=110 pi=[66,110)/1 crt=48'1034 mlcod 48'1034 active pruub 266.491546631s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 110 pg[10.12( v 48'1034 (0'0,48'1034] local-lis/les=108/109 n=4 ec=55/42 lis/c=108/66 les/c/f=109/67/0 sis=110 pruub=14.964916229s) [2] r=-1 lpr=110 pi=[66,110)/1 crt=48'1034 mlcod 0'0 unknown NOTIFY pruub 266.491546631s@ mbc={}] state<Start>: transitioning to Stray
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 110 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=71/71 les/c/f=72/72/0 sis=110) [0] r=0 lpr=110 pi=[71,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
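[Annotation] The mon's _set_new_cache_sizes lines recur throughout this window; the three allocations sum to just under the reported cache_size, with kv_alloc shrinking slightly in later samples (322961408 -> 318767104). A quick check of the arithmetic from the line above:

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232
    kv_alloc = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)
    # -> 1019215872 838859   (about 0.08% of cache_size left unallocated)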
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 12 17:00:04 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 111 pg[10.13( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=109/62 les/c/f=110/63/0 sis=111) [0] r=0 lpr=111 pi=[62,111)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 111 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=71/71 les/c/f=72/72/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[71,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 111 pg[10.13( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=109/62 les/c/f=110/63/0 sis=111) [0] r=0 lpr=111 pi=[62,111)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:04 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 111 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=71/71 les/c/f=72/72/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[71,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 17:00:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210005 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:00:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 12 17:00:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 12 17:00:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 12 17:00:05 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 12 17:00:05 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 112 pg[10.13( v 48'1034 (0'0,48'1034] local-lis/les=111/112 n=5 ec=55/42 lis/c=109/62 les/c/f=110/63/0 sis=111) [0] r=0 lpr=111 pi=[62,111)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 17:00:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:06 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:00:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:06 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:00:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:06 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:00:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 1 remapped+peering, 1 active+remapped, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 12 17:00:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 12 17:00:06 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 12 17:00:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 113 pg[10.14( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=111/71 les/c/f=112/72/0 sis=113) [0] r=0 lpr=113 pi=[71,113)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:06 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 113 pg[10.14( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=5 ec=55/42 lis/c=111/71 les/c/f=112/72/0 sis=113) [0] r=0 lpr=113 pi=[71,113)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:06.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 12 17:00:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 12 17:00:07 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 12 17:00:07 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 114 pg[10.14( v 48'1034 (0'0,48'1034] local-lis/les=113/114 n=5 ec=55/42 lis/c=111/71 les/c/f=112/72/0 sis=113) [0] r=0 lpr=113 pi=[71,113)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
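[Annotation] The osd.0 lines above trace one PG peering cycle per osdmap epoch: PeeringState::start_peering_interval records the old -> new "up" and "acting" OSD sets plus the local role change (0 = primary, -1 = not in the acting set, hence Stray), the PG re-enters the state machine as Primary or Stray, and the cycle closes with "AllReplicasActivated Activating complete". A hedged sketch that extracts the pgid and set transitions from such lines; the regex is mine:

    import re

    # Hypothetical extractor for the start_peering_interval lines in this log.
    PEER_RE = re.compile(
        r'pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*?'
        r'start_peering_interval up (?P<up_old>\[[^\]]*\]) -> (?P<up_new>\[[^\]]*\]), '
        r'acting (?P<act_old>\[[^\]]*\]) -> (?P<act_new>\[[^\]]*\])'
    )

    line = ('osd.0 pg_epoch: 111 pg[10.14( empty ... )] '
            'PeeringState::start_peering_interval up [0] -> [0], '
            'acting [0] -> [2], acting_primary 0 -> 2, role 0 -> -1, ...')

    m = PEER_RE.search(line)
    if m:
        print(m.group('pgid'),
              'up', m.group('up_old'), '->', m.group('up_new'),
              'acting', m.group('act_old'), '->', m.group('act_new'))
        # -> 10.14 up [0] -> [0] acting [0] -> [2]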
Oct 12 17:00:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 1 remapped+peering, 1 active+remapped, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:08.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:08.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 17:00:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s; 20 B/s, 1 objects/s recovering
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 12 17:00:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:10.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 12 17:00:10 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:00:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:00:11 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
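[Annotation] The audit pairs above show mgr.compute-0.fmjeht stepping pgp_num_actual on pool default.rgw.log one unit at a time (20, 21, 22, ...); each dispatched mon_command is acknowledged as 'finished' and commits a fresh osdmap epoch (e109, e110, ...), which is what keeps triggering the peering churn on osd.0. A minimal sketch of issuing the same command through the python-rados binding; the JSON payload mirrors the audit log exactly, while the conffile and credentials are assumptions:

    import json
    import rados

    # Send the same "osd pool set" mon command seen in the audit channel above.
    # Assumes /etc/ceph/ceph.conf and a client keyring with sufficient mon caps.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = {"prefix": "osd pool set",
               "pool": "default.rgw.log",
               "var": "pgp_num_actual",
               "val": "23"}
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
        print(ret, outs)  # ret == 0 on success; the mon logs dispatch/finished
    finally:
        cluster.shutdown()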
Oct 12 17:00:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:00:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:00:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 12 17:00:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:12.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:12.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 12 17:00:12 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 12 17:00:13 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 12 17:00:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 964 B/s rd, 0 op/s; 17 B/s, 1 objects/s recovering
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 12 17:00:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 12 17:00:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:14.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 12 17:00:14 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 12 17:00:15 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 12 17:00:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 12 17:00:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:16.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 12 17:00:16 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 12 17:00:17 np0005481680 ceph-mgr[73901]: [dashboard INFO request] [192.168.122.100:45930] [POST] [200] [0.140s] [4.0B] [5c01c626-49a2-4274-95f7-6a33b5005aab] /api/prometheus_receiver
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3578000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:17 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
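[Annotation] This restart sequence shows ganesha.nfsd coming up degraded but functional: no D-Bus (the container exposes no /run/dbus/system_bus_socket, so the dbus service thread exits), no usable krb5 keytab, unrecognized RADOS_URLS/RGW config blocks, and a fresh 90 s grace period. The svc_* "proxy header rest len failed ... rlen = %" events line up with haproxy's Layer-4 checks on the nfs backend (DOWN at 17:00:05, UP again at 17:00:19): the checker opens a TCP connection and closes it without sending the PROXY-protocol header ganesha expects, and the bare "%" looks like a format-string artifact in the ntirpc logger. A sketch of such an L4 probe; host and port are assumptions, the log only names the backends:

    import socket

    # Minimal Layer-4 health probe in the style of haproxy's "check": open a
    # TCP connection and close it immediately, sending no payload.
    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True      # connect succeeded -> "Layer4 check passed"
        except OSError:
            return False         # refused/timeout -> "Layer4 connection problem"

    print(l4_check("127.0.0.1", 2049))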
Oct 12 17:00:17 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:00:18
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', '.rgw.root', '.mgr', 'backups', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
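[Annotation] The balancer run just above prepared 0/10 upmap changes (nothing misplaced beyond the 0.05 threshold), and the pg_autoscaler numbers are internally consistent: for every bias-1.0 pool the logged pg target is exactly 300x the logged space ratio, i.e. pg_target = capacity_ratio x bias x (3 OSDs x the default mon_target_pg_per_osd of 100). The result is then quantized, apparently clamped so it never drops below the pool's floor, which is why targets near zero still quantize to the current pg_num. A check of that arithmetic against three of the logged pools:

    # Reproduce the pg_autoscaler targets logged above. The factor of 300 is
    # an inference (3 OSDs x default mon_target_pg_per_osd = 100); it is what
    # the logged target/ratio quotients imply for every bias-1.0 pool.
    FACTOR = 3 * 100

    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
        'default.rgw.log':    (2.1620840658982875e-06, 1.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * FACTOR}")
    # .mgr: pg target 0.0021557249951162337          (matches the log)
    # cephfs.cephfs.meta: pg target 0.0006104707950771635
    # default.rgw.log: pg target 0.0006486252197694863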
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:00:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:00:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:18.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 12 17:00:18 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 12 17:00:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:19 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:19 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 119 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=88/88 les/c/f=89/89/0 sis=119) [0] r=0 lpr=119 pi=[88,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210019 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:00:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:19 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 12 17:00:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 12 17:00:19 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 12 17:00:19 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 120 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=88/88 les/c/f=89/89/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[88,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:19 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 120 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=88/88 les/c/f=89/89/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[88,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 17:00:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:19 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 12 17:00:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 12 17:00:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:20 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:00:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:20 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:00:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 12 17:00:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 12 17:00:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 12 17:00:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:21 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:21 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 12 17:00:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:21 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 12 17:00:21 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 12 17:00:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 122 pg[10.19( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=7 ec=55/42 lis/c=120/88 les/c/f=121/89/0 sis=122) [0] r=0 lpr=122 pi=[88,122)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:21 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 122 pg[10.19( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=7 ec=55/42 lis/c=120/88 les/c/f=121/89/0 sis=122) [0] r=0 lpr=122 pi=[88,122)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:22] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Oct 12 17:00:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:22] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Oct 12 17:00:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 12 17:00:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:22.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 12 17:00:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=92/92 les/c/f=93/93/0 sis=123) [0] r=0 lpr=123 pi=[92,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:22 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 123 pg[10.19( v 48'1034 (0'0,48'1034] local-lis/les=122/123 n=7 ec=55/42 lis/c=120/88 les/c/f=121/89/0 sis=122) [0] r=0 lpr=122 pi=[88,122)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 12 17:00:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 12 17:00:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:23 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:23 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:00:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:23 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:23 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 12 17:00:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 12 17:00:23 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 12 17:00:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 124 pg[10.1b( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=92/92 les/c/f=93/93/0 sis=124) [0]/[1] r=-1 lpr=124 pi=[92,124)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:23 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 124 pg[10.1b( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=92/92 les/c/f=93/93/0 sis=124) [0]/[1] r=-1 lpr=124 pi=[92,124)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 12 17:00:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 12 17:00:24 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 12 17:00:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 12 17:00:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 12 17:00:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:25 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:25 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:25 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 12 17:00:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 12 17:00:26 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 12 17:00:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 126 pg[10.1b( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=2 ec=55/42 lis/c=124/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 luod=0'0 crt=48'1034 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 12 17:00:26 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 126 pg[10.1b( v 48'1034 (0'0,48'1034] local-lis/les=0/0 n=2 ec=55/42 lis/c=124/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 crt=48'1034 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 12 17:00:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Oct 12 17:00:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:26.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:26.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:00:26.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:00:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 12 17:00:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:27 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 12 17:00:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210027 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:00:27 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 12 17:00:27 np0005481680 ceph-osd[81892]: osd.0 pg_epoch: 127 pg[10.1b( v 48'1034 (0'0,48'1034] local-lis/les=126/127 n=2 ec=55/42 lis/c=124/92 les/c/f=125/93/0 sis=126) [0] r=0 lpr=126 pi=[92,126)/1 crt=48'1034 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 12 17:00:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:27 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:27 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 711 B/s wr, 3 op/s; 25 B/s, 1 objects/s recovering
Oct 12 17:00:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:28.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:29 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:29 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:29 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 2 op/s; 36 B/s, 1 objects/s recovering
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 12 17:00:30 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 12 17:00:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:30.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:30.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:31 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 12 17:00:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:31 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:31 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:32] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Oct 12 17:00:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:32] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Oct 12 17:00:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 12 17:00:32 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 12 17:00:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:32.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:32.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:33 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 12 17:00:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 12 17:00:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:33 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:33 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.294729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834294772, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2917, "num_deletes": 252, "total_data_size": 6369405, "memory_usage": 6587368, "flush_reason": "Manual Compaction"}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834329540, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6033212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7810, "largest_seqno": 10726, "table_properties": {"data_size": 6019326, "index_size": 8844, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3973, "raw_key_size": 36330, "raw_average_key_size": 22, "raw_value_size": 5988387, "raw_average_value_size": 3773, "num_data_blocks": 385, "num_entries": 1587, "num_filter_entries": 1587, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302736, "oldest_key_time": 1760302736, "file_creation_time": 1760302834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 35144 microseconds, and 19427 cpu microseconds.
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.329871) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6033212 bytes OK
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.330047) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.331841) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.331868) EVENT_LOG_v1 {"time_micros": 1760302834331860, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.331892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6355766, prev total WAL file size 6355766, number of live WAL files 2.
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.335108) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5891KB)], [23(10MB)]
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834335159, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 16925609, "oldest_snapshot_seqno": -1}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4031 keys, 13121748 bytes, temperature: kUnknown
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834407984, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 13121748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13089674, "index_size": 20892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 102955, "raw_average_key_size": 25, "raw_value_size": 13010824, "raw_average_value_size": 3227, "num_data_blocks": 902, "num_entries": 4031, "num_filter_entries": 4031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760302834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.408289) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13121748 bytes
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.409674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.0 rd, 179.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.8, 10.4 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(5.0) write-amplify(2.2) OK, records in: 4570, records dropped: 539 output_compression: NoCompression
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.409707) EVENT_LOG_v1 {"time_micros": 1760302834409693, "job": 8, "event": "compaction_finished", "compaction_time_micros": 72941, "compaction_time_cpu_micros": 46769, "output_level": 6, "num_output_files": 1, "total_output_size": 13121748, "num_input_records": 4570, "num_output_records": 4031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834411520, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760302834415280, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.335009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.415388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.415396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.415399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.415402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:00:34.415405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 12 17:00:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:34.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:34 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 12 17:00:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:35 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:35 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 12 17:00:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 12 17:00:35 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 12 17:00:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:35 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 788 B/s rd, 0 op/s
Oct 12 17:00:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:36.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 12 17:00:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 12 17:00:36 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 12 17:00:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:00:36.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:00:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:37 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:37 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 12 17:00:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 12 17:00:37 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 12 17:00:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:37 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:00:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:38.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:39 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:39 np0005481680 python3.9[107635]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:00:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:39 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:39 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 1 objects/s recovering
Oct 12 17:00:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:40.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:41 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:41 np0005481680 python3.9[107924]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 12 17:00:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:41 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:41 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:42] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:00:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:42] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:00:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v105: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Oct 12 17:00:42 np0005481680 python3.9[108077]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 12 17:00:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:42.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:42.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:43 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:43 np0005481680 python3.9[108230]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:00:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:43 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:43 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct 12 17:00:44 np0005481680 python3.9[108383]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 12 17:00:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:44.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:45 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:45 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:45 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:45 np0005481680 python3.9[108537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:00:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v107: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 327 B/s rd, 0 op/s; 11 B/s, 1 objects/s recovering
Oct 12 17:00:46 np0005481680 python3.9[108689]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:00:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:46.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:46.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:00:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:00:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:47 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:47 np0005481680 python3.9[108767]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:00:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:47 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f356c002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:47 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 296 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Oct 12 17:00:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:00:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f37bfdd90d0>)]
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f37bfdd9040>)]
Oct 12 17:00:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct 12 17:00:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:48.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:48.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:48 np0005481680 python3.9[108921]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 12 17:00:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:49 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:49 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:49 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:49 np0005481680 python3.9[109128]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 12 17:00:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Oct 12 17:00:50 np0005481680 podman[109272]: 2025-10-12 21:00:50.33273741 +0000 UTC m=+0.091478161 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 12 17:00:50 np0005481680 podman[109272]: 2025-10-12 21:00:50.448612165 +0000 UTC m=+0.207352916 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct 12 17:00:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:50.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:50 np0005481680 python3.9[109419]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 17:00:50 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.fmjeht(active, since 92s), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 17:00:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:51 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:51 np0005481680 podman[109492]: 2025-10-12 21:00:51.121286354 +0000 UTC m=+0.070348781 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:51 np0005481680 podman[109492]: 2025-10-12 21:00:51.135527584 +0000 UTC m=+0.084590011 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:51 np0005481680 podman[109678]: 2025-10-12 21:00:51.551202558 +0000 UTC m=+0.069380536 container exec 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:00:51 np0005481680 podman[109678]: 2025-10-12 21:00:51.572387949 +0000 UTC m=+0.090565877 container exec_died 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:00:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:51 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:51 np0005481680 python3.9[109728]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 12 17:00:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:51 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:51 np0005481680 podman[109771]: 2025-10-12 21:00:51.831770276 +0000 UTC m=+0.064629332 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:00:51 np0005481680 podman[109771]: 2025-10-12 21:00:51.865568436 +0000 UTC m=+0.098427532 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
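[editor's note] The recurring ganesha.nfsd "proxy header rest len failed ... (will set dead)" events line up with the haproxy/keepalived NFS-ingress containers just above: something opens a TCP connection to the ganesha backend but never completes a PROXY-protocol preamble, so the TIRPC layer marks the transport dead. A bare connect-and-close, as a naive TCP health check performs, would produce exactly these lines. Purely as illustration (the address and port below are assumptions, not taken from this log), a client that satisfies a PROXY-v1-expecting listener sends a preamble like this before its payload:

    import socket

    # Hypothetical endpoint: ganesha in this cluster sits behind the
    # haproxy-nfs-cephfs ingress, whose backend expects a PROXY preamble.
    HOST, PORT = '192.168.122.100', 2049  # assumed address/port

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        # PROXY protocol v1 header: real client addr/port, then dest addr/port.
        s.sendall(b'PROXY TCP4 192.168.122.50 192.168.122.100 40000 2049\r\n')
        # ...RPC payload would follow here. Connecting and closing without a
        # complete preamble is what triggers the svc_vc_recv EVENT lines above.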
Oct 12 17:00:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:00:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:00:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:00:52 np0005481680 podman[109861]: 2025-10-12 21:00:52.104301936 +0000 UTC m=+0.069079547 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, distribution-scope=public, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 12 17:00:52 np0005481680 podman[109861]: 2025-10-12 21:00:52.145632902 +0000 UTC m=+0.110410443 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Oct 12 17:00:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:00:52 np0005481680 podman[109980]: 2025-10-12 21:00:52.449422555 +0000 UTC m=+0.075460345 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:52 np0005481680 podman[109980]: 2025-10-12 21:00:52.50808919 +0000 UTC m=+0.134126990 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:52 np0005481680 podman[110128]: 2025-10-12 21:00:52.804659715 +0000 UTC m=+0.063495362 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:00:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:52 np0005481680 python3.9[110113]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:00:52 np0005481680 podman[110128]: 2025-10-12 21:00:52.99935249 +0000 UTC m=+0.258188137 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:00:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:53 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:53 np0005481680 podman[110243]: 2025-10-12 21:00:53.424747036 +0000 UTC m=+0.064801196 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:53 np0005481680 podman[110243]: 2025-10-12 21:00:53.469632394 +0000 UTC m=+0.109686504 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:53 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:53 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3540000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:00:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:54.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:00:54 np0005481680 python3.9[110605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:00:54 np0005481680 podman[110632]: 2025-10-12 21:00:54.963608089 +0000 UTC m=+0.061964293 container create e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:00:55 np0005481680 systemd[1]: Started libpod-conmon-e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c.scope.
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:54.932412057 +0000 UTC m=+0.030768321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:55.077677917 +0000 UTC m=+0.176034171 container init e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:55.088604791 +0000 UTC m=+0.186961005 container start e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:55.092165973 +0000 UTC m=+0.190522187 container attach e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:00:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:55 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:55 np0005481680 charming_mirzakhani[110672]: 167 167
Oct 12 17:00:55 np0005481680 systemd[1]: libpod-e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c.scope: Deactivated successfully.
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:55.097255296 +0000 UTC m=+0.195611560 container died e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:00:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-47002177da6e1b0fa1ba28c26e5bc4ded9a00d1719c30e419f09db91dc9c95ad-merged.mount: Deactivated successfully.
Oct 12 17:00:55 np0005481680 podman[110632]: 2025-10-12 21:00:55.155171122 +0000 UTC m=+0.253527336 container remove e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:00:55 np0005481680 systemd[1]: libpod-conmon-e538ae9562a0ff12ca106d84a18a0a5ba39027642d3b595438610fe88e3a483c.scope: Deactivated successfully.
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.38146047 +0000 UTC m=+0.062961309 container create dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:00:55 np0005481680 systemd[1]: Started libpod-conmon-dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6.scope.
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.351262734 +0000 UTC m=+0.032763613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.478019782 +0000 UTC m=+0.159520651 container init dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.497157869 +0000 UTC m=+0.178658678 container start dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.501209434 +0000 UTC m=+0.182710273 container attach dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:00:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:55 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:55 np0005481680 python3.9[110844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
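[editor's note] The stat invocation above (get_checksum=True, checksum_algorithm=sha1) is the first half of Ansible's usual template/copy flow: it hashes the existing /etc/modules-load.d/99-edpm.conf and compares that against the rendered source before deciding whether to rewrite the file. A minimal Python equivalent of that checksum step (the path is reused from the log line; the rest is a sketch, not Ansible's actual module code):

    import hashlib

    def sha1_of(path, chunk=65536):
        """Stream a file through SHA-1, as the ansible stat checksum does."""
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            while data := f.read(chunk):
                h.update(data)
        return h.hexdigest()

    # An unchanged checksum is why the follow-up task (see the
    # ansible.legacy.file call shortly after) only enforces
    # owner/group/mode/setype instead of rewriting the content.
    print(sha1_of('/etc/modules-load.d/99-edpm.conf'))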
Oct 12 17:00:55 np0005481680 optimistic_mcclintock[110811]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:00:55 np0005481680 optimistic_mcclintock[110811]: --> All data devices are unavailable
Oct 12 17:00:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:55 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:55 np0005481680 systemd[1]: libpod-dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6.scope: Deactivated successfully.
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.836258521 +0000 UTC m=+0.517759320 container died dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:00:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-349bcc641e122afab4d7404e35ed0f7ff46e988ee97d344285ce293c3223b6a2-merged.mount: Deactivated successfully.
Oct 12 17:00:55 np0005481680 podman[110747]: 2025-10-12 21:00:55.882640458 +0000 UTC m=+0.564141257 container remove dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:00:55 np0005481680 systemd[1]: libpod-conmon-dda9f820e0be275602b0483ad32c8f4b7c4e204ed46c3f9da49f5655804015e6.scope: Deactivated successfully.
Oct 12 17:00:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:00:56 np0005481680 python3.9[110995]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.470601613 +0000 UTC m=+0.053023710 container create 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:00:56 np0005481680 systemd[1]: Started libpod-conmon-88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089.scope.
Oct 12 17:00:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.446138326 +0000 UTC m=+0.028560433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.552674027 +0000 UTC m=+0.135096174 container init 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.564430154 +0000 UTC m=+0.146852211 container start 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.568022287 +0000 UTC m=+0.150444434 container attach 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 17:00:56 np0005481680 interesting_johnson[111102]: 167 167
Oct 12 17:00:56 np0005481680 systemd[1]: libpod-88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089.scope: Deactivated successfully.
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.572334039 +0000 UTC m=+0.154756126 container died 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 17:00:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f0f90a7c5537fbb8cc77c6bfabcaf45f2cebf303fc998843756102889fa6894e-merged.mount: Deactivated successfully.
Oct 12 17:00:56 np0005481680 podman[111063]: 2025-10-12 21:00:56.63347829 +0000 UTC m=+0.215900357 container remove 88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:00:56 np0005481680 systemd[1]: libpod-conmon-88d89db5641c8516f2a441a1e915d9adaccdab67218832f3cff5325786c16089.scope: Deactivated successfully.
Oct 12 17:00:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:56.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:56 np0005481680 podman[111196]: 2025-10-12 21:00:56.86564689 +0000 UTC m=+0.069735346 container create b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:00:56 np0005481680 systemd[1]: Started libpod-conmon-b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664.scope.
Oct 12 17:00:56 np0005481680 podman[111196]: 2025-10-12 21:00:56.838921604 +0000 UTC m=+0.043010110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:00:56.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
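[editor's note] The alertmanager dispatch error above means both dashboard receivers (compute-1 and compute-2 at :8443/api/prometheus_receiver) timed out; alertmanager retried twice per target and gave up. When reproducing this locally, a stand-in receiver can be as small as the sketch below. This is a hypothetical test double, not the Ceph dashboard's actual endpoint implementation, and it serves plain HTTP (the real dashboard may serve TLS on 8443, in which case this will not match).

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Drain the body so the client sees a clean 200 instead of a timeout.
            length = int(self.headers.get('Content-Length', 0))
            self.rfile.read(length)
            self.send_response(200)
            self.end_headers()

    # Port chosen to match the URL in the log line above.
    HTTPServer(('', 8443), Receiver).serve_forever()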
Oct 12 17:00:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4545654da66deed51398e125ca58ea70f54801eb59fd41ff01b4c7504145403f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4545654da66deed51398e125ca58ea70f54801eb59fd41ff01b4c7504145403f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4545654da66deed51398e125ca58ea70f54801eb59fd41ff01b4c7504145403f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4545654da66deed51398e125ca58ea70f54801eb59fd41ff01b4c7504145403f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:57 np0005481680 podman[111196]: 2025-10-12 21:00:57.007657553 +0000 UTC m=+0.211746039 container init b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:00:57 np0005481680 podman[111196]: 2025-10-12 21:00:57.023737132 +0000 UTC m=+0.227825588 container start b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:00:57 np0005481680 podman[111196]: 2025-10-12 21:00:57.028020074 +0000 UTC m=+0.232108570 container attach b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:00:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:57 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:57 np0005481680 python3.9[111248]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]: {
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:    "0": [
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:        {
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "devices": [
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "/dev/loop3"
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            ],
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "lv_name": "ceph_lv0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "lv_size": "21470642176",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "name": "ceph_lv0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "tags": {
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.cluster_name": "ceph",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.crush_device_class": "",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.encrypted": "0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.osd_id": "0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.type": "block",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.vdo": "0",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:                "ceph.with_tpm": "0"
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            },
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "type": "block",
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:            "vg_name": "ceph_vg0"
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:        }
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]:    ]
Oct 12 17:00:57 np0005481680 festive_ramanujan[111246]: }
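[editor's note] The JSON emitted by this throwaway ceph container (festive_ramanujan, effectively a ceph-volume lvm list run by cephadm) carries the OSD metadata twice: once as the flat lv_tags string and once parsed under "tags". Reconstructing the parsed form from the flat string is straightforward; the function below is a sketch that assumes tag values contain no commas or '=' characters, which holds for every value in this log.

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split 'k1=v1,k2=v2,...' into a dict, tolerating empty values."""
        out = {}
        for item in lv_tags.split(','):
            key, _, value = item.partition('=')
            out[key] = value
        return out

    tags = parse_lv_tags(
        'ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.encrypted=0,'
        'ceph.osd_id=0,ceph.type=block'   # abridged from the lv_tags line above
    )
    assert tags['ceph.osd_id'] == '0' and tags['ceph.type'] == 'block'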
Oct 12 17:00:57 np0005481680 systemd[1]: libpod-b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664.scope: Deactivated successfully.
Oct 12 17:00:57 np0005481680 conmon[111246]: conmon b6bf5dd6575ca7f11876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664.scope/container/memory.events
Oct 12 17:00:57 np0005481680 podman[111196]: 2025-10-12 21:00:57.417481105 +0000 UTC m=+0.621569581 container died b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:00:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4545654da66deed51398e125ca58ea70f54801eb59fd41ff01b4c7504145403f-merged.mount: Deactivated successfully.
Oct 12 17:00:57 np0005481680 podman[111196]: 2025-10-12 21:00:57.473088642 +0000 UTC m=+0.677177098 container remove b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_ramanujan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:00:57 np0005481680 systemd[1]: libpod-conmon-b6bf5dd6575ca7f11876f9cd407824066a36a3438c5804937f0ae452e8d88664.scope: Deactivated successfully.
Oct 12 17:00:57 np0005481680 python3.9[111345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:00:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:57 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:57 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.144485497 +0000 UTC m=+0.055808593 container create 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:00:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:00:58 np0005481680 systemd[1]: Started libpod-conmon-926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702.scope.
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.117038413 +0000 UTC m=+0.028361489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.2579863 +0000 UTC m=+0.169309376 container init 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.272523038 +0000 UTC m=+0.183846104 container start 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.276233684 +0000 UTC m=+0.187556760 container attach 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:00:58 np0005481680 quizzical_carson[111491]: 167 167
Oct 12 17:00:58 np0005481680 systemd[1]: libpod-926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702.scope: Deactivated successfully.
Oct 12 17:00:58 np0005481680 conmon[111491]: conmon 926c87178bf414435a40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702.scope/container/memory.events
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.282727504 +0000 UTC m=+0.194050600 container died 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:00:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-08cb0724493a9f767834ae8ae4cd238a6995b3f0a6b6f9746b98849592572a5c-merged.mount: Deactivated successfully.
Oct 12 17:00:58 np0005481680 podman[111463]: 2025-10-12 21:00:58.329781057 +0000 UTC m=+0.241104143 container remove 926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:00:58 np0005481680 systemd[1]: libpod-conmon-926c87178bf414435a40e6207dda87a583683b2b7a2a8c3cc3cb1e5aecf54702.scope: Deactivated successfully.
Oct 12 17:00:58 np0005481680 podman[111583]: 2025-10-12 21:00:58.527389829 +0000 UTC m=+0.062838966 container create aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:00:58 np0005481680 systemd[1]: Started libpod-conmon-aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867.scope.
Oct 12 17:00:58 np0005481680 podman[111583]: 2025-10-12 21:00:58.497706146 +0000 UTC m=+0.033155383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:00:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:00:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a9b8ceec1b5d654ac2742c6a088d3294741889f4781802f3d3c4f9499b7c97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a9b8ceec1b5d654ac2742c6a088d3294741889f4781802f3d3c4f9499b7c97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a9b8ceec1b5d654ac2742c6a088d3294741889f4781802f3d3c4f9499b7c97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a9b8ceec1b5d654ac2742c6a088d3294741889f4781802f3d3c4f9499b7c97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:00:58 np0005481680 podman[111583]: 2025-10-12 21:00:58.637455301 +0000 UTC m=+0.172904468 container init aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:00:58 np0005481680 podman[111583]: 2025-10-12 21:00:58.648747805 +0000 UTC m=+0.184196932 container start aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:00:58 np0005481680 podman[111583]: 2025-10-12 21:00:58.652434301 +0000 UTC m=+0.187883518 container attach aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:00:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:00:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:00:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:00:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:00:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:00:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:00:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:00:58 np0005481680 python3.9[111652]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:00:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:59 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:59 np0005481680 lvm[111726]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:00:59 np0005481680 lvm[111726]: VG ceph_vg0 finished
Oct 12 17:00:59 np0005481680 lvm[111728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:00:59 np0005481680 lvm[111728]: VG ceph_vg0 finished
Oct 12 17:00:59 np0005481680 upbeat_goldstine[111647]: {}
Oct 12 17:00:59 np0005481680 systemd[1]: libpod-aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867.scope: Deactivated successfully.
Oct 12 17:00:59 np0005481680 systemd[1]: libpod-aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867.scope: Consumed 1.331s CPU time.
Oct 12 17:00:59 np0005481680 podman[111583]: 2025-10-12 21:00:59.486829077 +0000 UTC m=+1.022278234 container died aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:00:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-48a9b8ceec1b5d654ac2742c6a088d3294741889f4781802f3d3c4f9499b7c97-merged.mount: Deactivated successfully.
Oct 12 17:00:59 np0005481680 podman[111583]: 2025-10-12 21:00:59.540916624 +0000 UTC m=+1.076365751 container remove aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:00:59 np0005481680 systemd[1]: libpod-conmon-aab590cf0ba3413722ae5e38dfea18a6de84ddbf52d36ad01f4537d232d67867.scope: Deactivated successfully.
Oct 12 17:00:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:00:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:00:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:00:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:59 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:00:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:00:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:00:59 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:01:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:01:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:01:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:00.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:00.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:01 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:01 np0005481680 python3.9[111920]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:01:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:01 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:01 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:01:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:01:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:02 np0005481680 python3.9[112089]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 12 17:01:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:02.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:02.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:03 np0005481680 python3.9[112239]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:01:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:03 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:01:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:01:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:03 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:03 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=infra.usagestats t=2025-10-12T21:01:04.145391846Z level=info msg="Usage stats are ready to report"
Oct 12 17:01:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:04 np0005481680 python3.9[112393]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:01:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:04.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:04.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:05 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3540002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:05 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:05 np0005481680 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 12 17:01:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:05 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:05 np0005481680 systemd[1]: tuned.service: Deactivated successfully.
Oct 12 17:01:05 np0005481680 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 12 17:01:05 np0005481680 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 12 17:01:06 np0005481680 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 12 17:01:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:01:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:06.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:07 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:07 np0005481680 python3.9[112557]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 12 17:01:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:07 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3540002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:07 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:08.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:09 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:09 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:09 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3540002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:10 np0005481680 python3.9[112713]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:01:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:10.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:10.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:11 np0005481680 python3.9[112868]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:01:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:11 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:12] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:12] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:12 np0005481680 systemd[1]: session-40.scope: Deactivated successfully.
Oct 12 17:01:12 np0005481680 systemd[1]: session-40.scope: Consumed 1min 5.501s CPU time.
Oct 12 17:01:12 np0005481680 systemd-logind[783]: Session 40 logged out. Waiting for processes to exit.
Oct 12 17:01:12 np0005481680 systemd-logind[783]: Removed session 40.
Oct 12 17:01:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:12.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:13 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:13 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3548003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:14.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:14.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:15 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3540003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:15 np0005481680 kernel: ganesha.nfsd[107399]: segfault at 50 ip 00007f3627e2532e sp 00007f35f4ff8210 error 4 in libntirpc.so.5.8[7f3627e0a000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 12 17:01:15 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:01:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[107260]: 12/10/2025 21:01:15 : epoch 68ec16cf : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3554003db0 fd 38 proxy ignored for local
Oct 12 17:01:15 np0005481680 systemd[1]: Started Process Core Dump (PID 112925/UID 0).
Oct 12 17:01:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:01:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:16.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:01:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:16.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:01:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:17 np0005481680 systemd-coredump[112926]: Process 107264 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 46:#012#0  0x00007f3627e2532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 12 17:01:17 np0005481680 systemd[1]: systemd-coredump@1-112925-0.service: Deactivated successfully.
Oct 12 17:01:17 np0005481680 systemd[1]: systemd-coredump@1-112925-0.service: Consumed 1.140s CPU time.
Oct 12 17:01:17 np0005481680 podman[112936]: 2025-10-12 21:01:17.254431501 +0000 UTC m=+0.053011352 container died 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:01:17 np0005481680 systemd-logind[783]: New session 41 of user zuul.
Oct 12 17:01:17 np0005481680 systemd[1]: Started Session 41 of User zuul.
Oct 12 17:01:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ba976124df770b21aa5fd8a91bf06939177461671478d3de461bacab7579deb6-merged.mount: Deactivated successfully.
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:01:18
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.nfs', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data']
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:01:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:01:18 np0005481680 python3.9[113104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:01:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:01:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:01:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:19 np0005481680 podman[112936]: 2025-10-12 21:01:19.120199847 +0000 UTC m=+1.918779658 container remove 65a80e1280b0b0fa9949bc51d1f3d77be1e511e6cf1c47c99bf2843b4a6f857b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:01:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:01:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:01:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.857s CPU time.
Oct 12 17:01:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:20 np0005481680 python3.9[113290]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 12 17:01:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:20.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:20.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:21 np0005481680 python3.9[113444]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:01:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210121 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:01:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:22] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:22] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:22 np0005481680 python3.9[113529]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 12 17:01:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:24.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:24.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:25 np0005481680 python3.9[113684]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:01:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:01:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:26.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:27 np0005481680 python3.9[113840]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:01:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:01:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:29 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 2.
Oct 12 17:01:29 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:01:29 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.857s CPU time.
Oct 12 17:01:29 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:01:29 np0005481680 podman[114046]: 2025-10-12 21:01:29.857802691 +0000 UTC m=+0.109643813 container create 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:01:29 np0005481680 podman[114046]: 2025-10-12 21:01:29.773480299 +0000 UTC m=+0.025321431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:01:29 np0005481680 python3.9[114010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:01:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46121ae449a9102526c3d9095882c8c64be84b8ae00b71a28886ab3c55c5d6e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:01:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46121ae449a9102526c3d9095882c8c64be84b8ae00b71a28886ab3c55c5d6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:01:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46121ae449a9102526c3d9095882c8c64be84b8ae00b71a28886ab3c55c5d6e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:01:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46121ae449a9102526c3d9095882c8c64be84b8ae00b71a28886ab3c55c5d6e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:01:30 np0005481680 podman[114046]: 2025-10-12 21:01:30.006920362 +0000 UTC m=+0.258761504 container init 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:01:30 np0005481680 podman[114046]: 2025-10-12 21:01:30.012808491 +0000 UTC m=+0.264649593 container start 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:01:30 np0005481680 bash[114046]: 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da
Oct 12 17:01:30 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:01:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:01:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:30 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:01:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:30.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:31 np0005481680 python3.9[114254]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 12 17:01:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:32] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:32] "GET /metrics HTTP/1.1" 200 48248 "" "Prometheus/2.51.0"
Oct 12 17:01:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:01:32 np0005481680 python3.9[114406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:01:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:01:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:01:33 np0005481680 python3.9[114565]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:01:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:01:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:35 np0005481680 python3.9[114745]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:01:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:01:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:36 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:01:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:36 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:01:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:36.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:36.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:37 np0005481680 python3.9[115034]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 12 17:01:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:01:38 np0005481680 python3.9[115185]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:01:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:39 np0005481680 python3.9[115339]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:01:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:01:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:41 np0005481680 python3.9[115494]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:42] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:01:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:42] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:01:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:01:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:42 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:01:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:43 np0005481680 python3.9[115665]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:01:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:01:44 np0005481680 python3.9[115820]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 12 17:01:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:44.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9194001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:45 np0005481680 systemd[1]: session-41.scope: Deactivated successfully.
Oct 12 17:01:45 np0005481680 systemd[1]: session-41.scope: Consumed 19.665s CPU time.
Oct 12 17:01:45 np0005481680 systemd-logind[783]: Session 41 logged out. Waiting for processes to exit.
Oct 12 17:01:45 np0005481680 systemd-logind[783]: Removed session 41.
Oct 12 17:01:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210145 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:01:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:01:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:46.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9194001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:01:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:01:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:01:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:01:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:48.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9194001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:01:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:50 np0005481680 systemd-logind[783]: New session 42 of user zuul.
Oct 12 17:01:50 np0005481680 systemd[1]: Started Session 42 of User zuul.
Oct 12 17:01:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:51 np0005481680 python3.9[116006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:01:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:52] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:01:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:01:52] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:01:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:01:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:52.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:53 np0005481680 python3.9[116160]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:01:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9194001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:01:54 np0005481680 python3.9[116380]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:01:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:01:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:01:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:55 np0005481680 systemd-logind[783]: Session 42 logged out. Waiting for processes to exit.
Oct 12 17:01:55 np0005481680 systemd[1]: session-42.scope: Deactivated successfully.
Oct 12 17:01:55 np0005481680 systemd[1]: session-42.scope: Consumed 2.918s CPU time.
Oct 12 17:01:55 np0005481680 systemd-logind[783]: Removed session 42.
Oct 12 17:01:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:01:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:01:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:56.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:01:56.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:01:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:01:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:01:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:01:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:01:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:01:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:01:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:01:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:01:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a00023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:01:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:01:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:00 np0005481680 systemd-logind[783]: New session 43 of user zuul.
Oct 12 17:02:00 np0005481680 systemd[1]: Started Session 43 of User zuul.
Oct 12 17:02:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:00 np0005481680 podman[116561]: 2025-10-12 21:02:00.9717111 +0000 UTC m=+0.108665349 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:02:01 np0005481680 podman[116561]: 2025-10-12 21:02:01.093467979 +0000 UTC m=+0.230422188 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:02:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:01 np0005481680 python3.9[116776]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:02:01 np0005481680 podman[116809]: 2025-10-12 21:02:01.853127338 +0000 UTC m=+0.090199363 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:01 np0005481680 podman[116809]: 2025-10-12 21:02:01.869502481 +0000 UTC m=+0.106574466 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:02] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:02] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:02 np0005481680 podman[116935]: 2025-10-12 21:02:02.406279444 +0000 UTC m=+0.094491011 container exec 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:02:02 np0005481680 podman[116935]: 2025-10-12 21:02:02.427406778 +0000 UTC m=+0.115618325 container exec_died 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:02:02 np0005481680 podman[117093]: 2025-10-12 21:02:02.764153373 +0000 UTC m=+0.074639738 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:02:02 np0005481680 podman[117093]: 2025-10-12 21:02:02.77234457 +0000 UTC m=+0.082830855 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:02:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:02:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:02:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:02 np0005481680 python3.9[117133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:02:03 np0005481680 podman[117191]: 2025-10-12 21:02:03.10375851 +0000 UTC m=+0.069271053 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=)
Oct 12 17:02:03 np0005481680 podman[117191]: 2025-10-12 21:02:03.123405197 +0000 UTC m=+0.088917700 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 12 17:02:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:02:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:02:03 np0005481680 podman[117276]: 2025-10-12 21:02:03.434509643 +0000 UTC m=+0.088167970 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:03 np0005481680 podman[117276]: 2025-10-12 21:02:03.492349825 +0000 UTC m=+0.146008092 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:03 np0005481680 podman[117423]: 2025-10-12 21:02:03.813352983 +0000 UTC m=+0.087245038 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:02:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:03 np0005481680 podman[117423]: 2025-10-12 21:02:03.975837241 +0000 UTC m=+0.249729276 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:02:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:04 np0005481680 python3.9[117509]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:02:04 np0005481680 podman[117594]: 2025-10-12 21:02:04.571593785 +0000 UTC m=+0.096314857 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:04 np0005481680 podman[117594]: 2025-10-12 21:02:04.625661512 +0000 UTC m=+0.150382594 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:02:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:02:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:02:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:04.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:05 np0005481680 python3.9[117764]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:02:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.224687394 +0000 UTC m=+0.075249634 container create e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 17:02:06 np0005481680 systemd[1]: Started libpod-conmon-e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74.scope.
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.195376893 +0000 UTC m=+0.045939173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.323399929 +0000 UTC m=+0.173962249 container init e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.335962857 +0000 UTC m=+0.186525097 container start e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.33921343 +0000 UTC m=+0.189775700 container attach e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:02:06 np0005481680 systemd[1]: libpod-e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74.scope: Deactivated successfully.
Oct 12 17:02:06 np0005481680 dazzling_almeida[117905]: 167 167
Oct 12 17:02:06 np0005481680 conmon[117905]: conmon e6aa3569ce3e1b8fe474 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74.scope/container/memory.events
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.346816302 +0000 UTC m=+0.197378572 container died e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 12 17:02:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fb4da2ab6262069ed0684cd34067e9106b570b60e27606f725f846a97d791426-merged.mount: Deactivated successfully.
Oct 12 17:02:06 np0005481680 podman[117889]: 2025-10-12 21:02:06.409507167 +0000 UTC m=+0.260069447 container remove e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:02:06 np0005481680 systemd[1]: libpod-conmon-e6aa3569ce3e1b8fe474404f0b4ce3b4bcbcee589cc8e65a9e1c9d5ad5ed6c74.scope: Deactivated successfully.
Oct 12 17:02:06 np0005481680 podman[117952]: 2025-10-12 21:02:06.624624886 +0000 UTC m=+0.072008431 container create 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 17:02:06 np0005481680 systemd[1]: Started libpod-conmon-1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57.scope.
Oct 12 17:02:06 np0005481680 podman[117952]: 2025-10-12 21:02:06.595094339 +0000 UTC m=+0.042477944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:06 np0005481680 podman[117952]: 2025-10-12 21:02:06.731911069 +0000 UTC m=+0.179294674 container init 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:02:06 np0005481680 podman[117952]: 2025-10-12 21:02:06.750495539 +0000 UTC m=+0.197879084 container start 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:02:06 np0005481680 podman[117952]: 2025-10-12 21:02:06.754414008 +0000 UTC m=+0.201797603 container attach 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:02:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:06.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:02:07 np0005481680 youthful_pike[117968]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:02:07 np0005481680 youthful_pike[117968]: --> All data devices are unavailable
Oct 12 17:02:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:07 np0005481680 systemd[1]: libpod-1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57.scope: Deactivated successfully.
Oct 12 17:02:07 np0005481680 podman[117952]: 2025-10-12 21:02:07.174457199 +0000 UTC m=+0.621840714 container died 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:02:07 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7fd93fc126a0dc1428760df741f9639e6ae37c5ce520e023693a922dca88bfc3-merged.mount: Deactivated successfully.
Oct 12 17:02:07 np0005481680 podman[117952]: 2025-10-12 21:02:07.241546845 +0000 UTC m=+0.688930390 container remove 1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:02:07 np0005481680 systemd[1]: libpod-conmon-1f7ca9928d9e47e21a35ebdcc03b1d323cbf15edcb4ad99251a67eb462be6b57.scope: Deactivated successfully.
Oct 12 17:02:07 np0005481680 python3.9[118125]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:02:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:07 np0005481680 podman[118257]: 2025-10-12 21:02:07.948726337 +0000 UTC m=+0.062946463 container create 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:02:07 np0005481680 systemd[1]: Started libpod-conmon-9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4.scope.
Oct 12 17:02:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:07.922570966 +0000 UTC m=+0.036791182 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:08.018398589 +0000 UTC m=+0.132618795 container init 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:08.027866848 +0000 UTC m=+0.142086974 container start 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:08.030994737 +0000 UTC m=+0.145214953 container attach 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:02:08 np0005481680 nervous_rosalind[118274]: 167 167
Oct 12 17:02:08 np0005481680 systemd[1]: libpod-9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4.scope: Deactivated successfully.
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:08.033354527 +0000 UTC m=+0.147574663 container died 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 17:02:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-30bacb73f96d880cc84ca398ef12830f16866fbf3bd71294071ef4dfe2bd41e1-merged.mount: Deactivated successfully.
Oct 12 17:02:08 np0005481680 podman[118257]: 2025-10-12 21:02:08.087838845 +0000 UTC m=+0.202059011 container remove 9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:02:08 np0005481680 systemd[1]: libpod-conmon-9822cc785edec8ef8701e89b706fdb857596497489e21ac53907ccbbd79b11d4.scope: Deactivated successfully.
Oct 12 17:02:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.333908786 +0000 UTC m=+0.068670797 container create f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:02:08 np0005481680 systemd[1]: Started libpod-conmon-f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09.scope.
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.30993051 +0000 UTC m=+0.044692581 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:08 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4879c0206535900a42780de417397f90e638e93ab4dcd8634734741ab33d0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4879c0206535900a42780de417397f90e638e93ab4dcd8634734741ab33d0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4879c0206535900a42780de417397f90e638e93ab4dcd8634734741ab33d0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe4879c0206535900a42780de417397f90e638e93ab4dcd8634734741ab33d0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.447772426 +0000 UTC m=+0.182534437 container init f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.462832386 +0000 UTC m=+0.197594377 container start f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.467098684 +0000 UTC m=+0.201860765 container attach f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 12 17:02:08 np0005481680 cranky_kare[118315]: {
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:    "0": [
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:        {
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "devices": [
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "/dev/loop3"
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            ],
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "lv_name": "ceph_lv0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "lv_size": "21470642176",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "name": "ceph_lv0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "tags": {
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.cluster_name": "ceph",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.crush_device_class": "",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.encrypted": "0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.osd_id": "0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.type": "block",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.vdo": "0",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:                "ceph.with_tpm": "0"
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            },
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "type": "block",
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:            "vg_name": "ceph_vg0"
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:        }
Oct 12 17:02:08 np0005481680 cranky_kare[118315]:    ]
Oct 12 17:02:08 np0005481680 cranky_kare[118315]: }
Oct 12 17:02:08 np0005481680 systemd[1]: libpod-f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09.scope: Deactivated successfully.
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.80784571 +0000 UTC m=+0.542607741 container died f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 17:02:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fe4879c0206535900a42780de417397f90e638e93ab4dcd8634734741ab33d0d-merged.mount: Deactivated successfully.
Oct 12 17:02:08 np0005481680 podman[118299]: 2025-10-12 21:02:08.860866651 +0000 UTC m=+0.595628682 container remove f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:02:08 np0005481680 systemd[1]: libpod-conmon-f637b724109a757958b3789434c13fba63efeb42f383d4fa17adc4b17648fd09.scope: Deactivated successfully.
Oct 12 17:02:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:02:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:02:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:08.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.568424232 +0000 UTC m=+0.056798168 container create 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:02:09 np0005481680 systemd[1]: Started libpod-conmon-870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773.scope.
Oct 12 17:02:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.548236121 +0000 UTC m=+0.036610107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.645263014 +0000 UTC m=+0.133636970 container init 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.653318628 +0000 UTC m=+0.141692564 container start 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.656748215 +0000 UTC m=+0.145122161 container attach 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:02:09 np0005481680 affectionate_chaum[118523]: 167 167
Oct 12 17:02:09 np0005481680 systemd[1]: libpod-870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773.scope: Deactivated successfully.
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.660641463 +0000 UTC m=+0.149015439 container died 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:02:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e51e7065d42d613a366a6147f3e3ae1fe1833c3a2a7c3dc273f2ac506a2144ad-merged.mount: Deactivated successfully.
Oct 12 17:02:09 np0005481680 podman[118505]: 2025-10-12 21:02:09.698320786 +0000 UTC m=+0.186694732 container remove 870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaum, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:02:09 np0005481680 systemd[1]: libpod-conmon-870d8ef60cd8a02cdad9de76b8b72936be45db280a1929563b0c3e1e8947a773.scope: Deactivated successfully.
Oct 12 17:02:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:09 np0005481680 podman[118605]: 2025-10-12 21:02:09.88556417 +0000 UTC m=+0.056439058 container create 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:02:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:09 np0005481680 systemd[1]: Started libpod-conmon-8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421.scope.
Oct 12 17:02:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:02:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68570d32e430ceb978c62ba6ef57e1c634c1a42710bd70aae5a40583bd7b5d2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68570d32e430ceb978c62ba6ef57e1c634c1a42710bd70aae5a40583bd7b5d2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68570d32e430ceb978c62ba6ef57e1c634c1a42710bd70aae5a40583bd7b5d2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68570d32e430ceb978c62ba6ef57e1c634c1a42710bd70aae5a40583bd7b5d2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:02:09 np0005481680 podman[118605]: 2025-10-12 21:02:09.869998647 +0000 UTC m=+0.040873555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:02:09 np0005481680 podman[118605]: 2025-10-12 21:02:09.971145535 +0000 UTC m=+0.142020463 container init 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:02:09 np0005481680 podman[118605]: 2025-10-12 21:02:09.978115831 +0000 UTC m=+0.148990759 container start 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:02:09 np0005481680 podman[118605]: 2025-10-12 21:02:09.996123886 +0000 UTC m=+0.166998854 container attach 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:02:10 np0005481680 python3.9[118636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:10 np0005481680 lvm[118837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:02:10 np0005481680 lvm[118837]: VG ceph_vg0 finished
Oct 12 17:02:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:02:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:10.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:02:10 np0005481680 modest_wing[118640]: {}
Oct 12 17:02:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:10 np0005481680 podman[118605]: 2025-10-12 21:02:10.94346402 +0000 UTC m=+1.114338928 container died 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:02:10 np0005481680 systemd[1]: libpod-8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421.scope: Deactivated successfully.
Oct 12 17:02:10 np0005481680 systemd[1]: libpod-8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421.scope: Consumed 1.512s CPU time.
Oct 12 17:02:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-68570d32e430ceb978c62ba6ef57e1c634c1a42710bd70aae5a40583bd7b5d2b-merged.mount: Deactivated successfully.
Oct 12 17:02:10 np0005481680 podman[118605]: 2025-10-12 21:02:10.995016884 +0000 UTC m=+1.165891762 container remove 8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_wing, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:02:11 np0005481680 systemd[1]: libpod-conmon-8622bc72fce11b1291602a3543ee02c1d70645eaf3cc87c0048d4fd2aed71421.scope: Deactivated successfully.
Oct 12 17:02:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:02:11 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:02:11 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:11 np0005481680 python3.9[118870]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:02:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:11 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210211 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:02:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:11 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:11 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:12] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 12 17:02:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:12] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Oct 12 17:02:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:02:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:12 np0005481680 python3.9[119073]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:02:12 np0005481680 python3.9[119151]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:12.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:13 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:13 np0005481680 python3.9[119304]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:02:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:13 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91980030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:13 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:14 np0005481680 python3.9[119408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:02:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:14.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:15 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:15 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:15 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:02:16 np0005481680 python3.9[119563]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:02:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:02:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:02:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:16.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:16.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:02:17 np0005481680 python3.9[119715]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:02:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:17 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:17 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:17 np0005481680 python3.9[119869]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:02:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:17 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:02:18
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'vms', 'backups']
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:02:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:02:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:02:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:02:18 np0005481680 python3.9[120021]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:02:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:18.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:19 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:19 np0005481680 python3.9[120174]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:02:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:19 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:19 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:02:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:20 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:02:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:21 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:21 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:21 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:22] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:02:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:22] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:02:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:02:22 np0005481680 python3.9[120330]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:02:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:22.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:23 np0005481680 python3.9[120484]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:02:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:23 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:23 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:02:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:23 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:02:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:23 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:23 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:24 np0005481680 python3.9[120638]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:02:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:02:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:24.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:25 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:25 np0005481680 python3.9[120791]: ansible-service_facts Invoked
Oct 12 17:02:25 np0005481680 network[120808]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:02:25 np0005481680 network[120809]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:02:25 np0005481680 network[120810]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:02:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:25 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:25 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:02:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:26 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:02:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:02:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:27 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:27 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:27 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:02:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:02:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:02:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:29 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:29 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:29 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:02:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:30.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:31 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:31 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:31 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:32] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:02:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:32] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:02:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:02:32 np0005481680 python3.9[121272]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:02:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:32.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:33 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210233 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:02:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:02:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:02:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:33 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:33 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:02:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:34.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:35 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:35 np0005481680 python3.9[121453]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 12 17:02:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:35 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:35 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:02:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:36.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:36.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:02:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:36.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:02:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:36.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:02:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:37 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:37 np0005481680 python3.9[121607]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:02:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:37 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:37 np0005481680 python3.9[121686]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:37 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9180003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:02:38 np0005481680 python3.9[121838]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:02:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:38.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:39 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:39 np0005481680 python3.9[121917]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:39 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:39 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:02:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:02:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:02:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:40.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:41 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:41 np0005481680 python3.9[122071]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:41 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:41 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:42] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:42] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:02:42 np0005481680 python3.9[122226]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:02:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:42.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:43 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:43 np0005481680 python3.9[122312]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:02:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:02:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:44.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:44.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91a0001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:45 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:46 np0005481680 systemd[1]: session-43.scope: Deactivated successfully.
Oct 12 17:02:46 np0005481680 systemd[1]: session-43.scope: Consumed 27.954s CPU time.
Oct 12 17:02:46 np0005481680 systemd-logind[783]: Session 43 logged out. Waiting for processes to exit.
Oct 12 17:02:46 np0005481680 systemd-logind[783]: Removed session 43.
Oct 12 17:02:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:02:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:46.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:46.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:02:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:46.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:02:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:46.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:02:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:47 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:02:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:02:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:02:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:48.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:48.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:49 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:02:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:50.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:02:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:02:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198001e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:51 np0005481680 systemd-logind[783]: New session 44 of user zuul.
Oct 12 17:02:51 np0005481680 systemd[1]: Started Session 44 of User zuul.
Oct 12 17:02:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:51 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:52] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:02:52] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:02:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:52 np0005481680 python3.9[122503]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:52.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:53 np0005481680 python3.9[122656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:02:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:53 np0005481680 python3.9[122735]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:02:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:53 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:54 np0005481680 systemd[1]: session-44.scope: Deactivated successfully.
Oct 12 17:02:54 np0005481680 systemd[1]: session-44.scope: Consumed 1.938s CPU time.
Oct 12 17:02:54 np0005481680 systemd-logind[783]: Session 44 logged out. Waiting for processes to exit.
Oct 12 17:02:54 np0005481680 systemd-logind[783]: Removed session 44.
Oct 12 17:02:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:54.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:54.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:02:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:55 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:02:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:56.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:56.974Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:02:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:02:56.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:02:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:02:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:56.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:02:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:57 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:02:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:02:58.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:02:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:02:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:02:58.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:02:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:59 np0005481680 systemd-logind[783]: New session 45 of user zuul.
Oct 12 17:02:59 np0005481680 systemd[1]: Started Session 45 of User zuul.
Oct 12 17:02:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:02:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:02:59 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:03:00 np0005481680 python3.9[122944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:03:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:01 np0005481680 python3.9[123102]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:01 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:02] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:03:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:02] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:03:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000021s ======
Oct 12 17:03:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Oct 12 17:03:02 np0005481680 python3.9[123277]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:03:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:03:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:03:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:03:03 np0005481680 python3.9[123356]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.w22w29pu recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:03 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:04 np0005481680 python3.9[123509]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 12 17:03:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:04.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 12 17:03:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:05 np0005481680 python3.9[123588]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.2c3lx_s2 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:05 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9198003450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:03:06 np0005481680 python3.9[123741]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:03:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:06.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:03:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:06.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:07 np0005481680 python3.9[123893]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:07 np0005481680 python3.9[123972]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:03:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:07 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:08 np0005481680 python3.9[124125]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:08 np0005481680 python3.9[124203]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:03:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f91940042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:09 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:10 np0005481680 python3.9[124357]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:03:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210310 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:03:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:10 np0005481680 python3.9[124509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:10.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:11 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9174003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:11 np0005481680 python3.9[124588]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[114064]: 12/10/2025 21:03:11 : epoch 68ec172a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f917c003dd0 fd 38 proxy ignored for local
Oct 12 17:03:11 np0005481680 kernel: ganesha.nfsd[115628]: segfault at 50 ip 00007f9250fba32e sp 00007f92097f9210 error 4 in libntirpc.so.5.8[7f9250f9f000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 12 17:03:11 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:03:11 np0005481680 systemd[1]: Started Process Core Dump (PID 124728/UID 0).
Oct 12 17:03:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:03:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:03:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:03:12 np0005481680 python3.9[124826]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:03:12 np0005481680 python3.9[124964]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:12 np0005481680 podman[124993]: 2025-10-12 21:03:12.930027408 +0000 UTC m=+0.056412483 container create 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:03:12 np0005481680 systemd[91683]: Created slice User Background Tasks Slice.
Oct 12 17:03:12 np0005481680 systemd[91683]: Starting Cleanup of User's Temporary Files and Directories...
Oct 12 17:03:12 np0005481680 systemd[91683]: Finished Cleanup of User's Temporary Files and Directories.
Oct 12 17:03:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:12 np0005481680 systemd[1]: Started libpod-conmon-934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19.scope.
Oct 12 17:03:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:12.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:12 np0005481680 podman[124993]: 2025-10-12 21:03:12.89846094 +0000 UTC m=+0.024846055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:13 np0005481680 systemd-coredump[124733]: Process 114089 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 53:
                                                       #0  0x00007f9250fba32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:03:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:13 np0005481680 podman[124993]: 2025-10-12 21:03:13.038236285 +0000 UTC m=+0.164621400 container init 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:03:13 np0005481680 podman[124993]: 2025-10-12 21:03:13.04901218 +0000 UTC m=+0.175397255 container start 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:03:13 np0005481680 gifted_hertz[125029]: 167 167
Oct 12 17:03:13 np0005481680 podman[124993]: 2025-10-12 21:03:13.05408988 +0000 UTC m=+0.180474995 container attach 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:03:13 np0005481680 systemd[1]: libpod-934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19.scope: Deactivated successfully.
Oct 12 17:03:13 np0005481680 conmon[125029]: conmon 934bb43806ae0e198b18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19.scope/container/memory.events
Oct 12 17:03:13 np0005481680 podman[124993]: 2025-10-12 21:03:13.056553233 +0000 UTC m=+0.182938298 container died 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:03:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e1ba9e7f72043d56e827fb9c11ee932f6e0927eeff5a61a29161710538d160da-merged.mount: Deactivated successfully.
Oct 12 17:03:13 np0005481680 systemd[1]: systemd-coredump@2-124728-0.service: Deactivated successfully.
Oct 12 17:03:13 np0005481680 systemd[1]: systemd-coredump@2-124728-0.service: Consumed 1.168s CPU time.
Oct 12 17:03:13 np0005481680 podman[124993]: 2025-10-12 21:03:13.122695784 +0000 UTC m=+0.249080859 container remove 934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:03:13 np0005481680 systemd[1]: libpod-conmon-934bb43806ae0e198b187fce8fa2353c55df8247202d984ce964667337194e19.scope: Deactivated successfully.
Oct 12 17:03:13 np0005481680 podman[125062]: 2025-10-12 21:03:13.162976184 +0000 UTC m=+0.029851234 container died 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 17:03:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c46121ae449a9102526c3d9095882c8c64be84b8ae00b71a28886ab3c55c5d6e-merged.mount: Deactivated successfully.
Oct 12 17:03:13 np0005481680 podman[125062]: 2025-10-12 21:03:13.20580119 +0000 UTC m=+0.072676230 container remove 4426184008f3d563d9028ed963d92d6c8bf719893f383d07015be0e5a96ca5da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:03:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.369489076 +0000 UTC m=+0.079758171 container create 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:03:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:03:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.750s CPU time.
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.331392421 +0000 UTC m=+0.041661576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:13 np0005481680 systemd[1]: Started libpod-conmon-3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12.scope.
Oct 12 17:03:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.48581402 +0000 UTC m=+0.196083175 container init 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.505248527 +0000 UTC m=+0.215517632 container start 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.510175713 +0000 UTC m=+0.220444808 container attach 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:03:13 np0005481680 objective_shirley[125174]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:03:13 np0005481680 objective_shirley[125174]: --> All data devices are unavailable
Oct 12 17:03:13 np0005481680 systemd[1]: libpod-3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12.scope: Deactivated successfully.
Oct 12 17:03:13 np0005481680 conmon[125174]: conmon 3c912222fa7be72bee3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12.scope/container/memory.events
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.940404554 +0000 UTC m=+0.650673629 container died 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:03:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cdb4dabece2e50d737aa0d32d27c8cb3987686895ff53041dbbfbb51349d305a-merged.mount: Deactivated successfully.
Oct 12 17:03:13 np0005481680 podman[125137]: 2025-10-12 21:03:13.999723321 +0000 UTC m=+0.709992396 container remove 3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:03:14 np0005481680 systemd[1]: libpod-conmon-3c912222fa7be72bee3b1e6d7b70679229a9d46194a51cea1708b5b2adcc5d12.scope: Deactivated successfully.
Oct 12 17:03:14 np0005481680 python3.9[125265]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:03:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:03:14 np0005481680 systemd[1]: Reloading.
Oct 12 17:03:14 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:03:14 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.676084697 +0000 UTC m=+0.059983265 container create d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:03:14 np0005481680 systemd[1]: Started libpod-conmon-d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8.scope.
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.650517373 +0000 UTC m=+0.034416021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.772567174 +0000 UTC m=+0.156465772 container init d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.783275367 +0000 UTC m=+0.167173955 container start d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.78728411 +0000 UTC m=+0.171182708 container attach d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:03:14 np0005481680 objective_lalande[125467]: 167 167
Oct 12 17:03:14 np0005481680 systemd[1]: libpod-d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8.scope: Deactivated successfully.
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.792234397 +0000 UTC m=+0.176132985 container died d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:03:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-99c9ee971530b94c8dfa5bb9cff2aea97ed976cdf1da8ed189e509d374c25a17-merged.mount: Deactivated successfully.
Oct 12 17:03:14 np0005481680 podman[125427]: 2025-10-12 21:03:14.847612232 +0000 UTC m=+0.231510830 container remove d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:03:14 np0005481680 systemd[1]: libpod-conmon-d77f70712534862dab7faf0737dd524e71bd09d6cd8815d23fb9173549dfa1d8.scope: Deactivated successfully.
Oct 12 17:03:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:14.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.088552754 +0000 UTC m=+0.061308238 container create 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:03:15 np0005481680 systemd[1]: Started libpod-conmon-8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e.scope.
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.067181947 +0000 UTC m=+0.039937521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:15 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465193cec3fc5090db2be4a07897ef78d6dd470156096445dde9fbe0de72a6c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465193cec3fc5090db2be4a07897ef78d6dd470156096445dde9fbe0de72a6c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465193cec3fc5090db2be4a07897ef78d6dd470156096445dde9fbe0de72a6c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465193cec3fc5090db2be4a07897ef78d6dd470156096445dde9fbe0de72a6c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.187820152 +0000 UTC m=+0.160575696 container init 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.203507594 +0000 UTC m=+0.176263108 container start 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.208242645 +0000 UTC m=+0.180998159 container attach 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:03:15 np0005481680 python3.9[125643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:15 np0005481680 objective_pare[125610]: {
Oct 12 17:03:15 np0005481680 objective_pare[125610]:    "0": [
Oct 12 17:03:15 np0005481680 objective_pare[125610]:        {
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "devices": [
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "/dev/loop3"
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            ],
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "lv_name": "ceph_lv0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "lv_size": "21470642176",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "name": "ceph_lv0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "tags": {
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.cluster_name": "ceph",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.crush_device_class": "",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.encrypted": "0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.osd_id": "0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.type": "block",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.vdo": "0",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:                "ceph.with_tpm": "0"
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            },
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "type": "block",
Oct 12 17:03:15 np0005481680 objective_pare[125610]:            "vg_name": "ceph_vg0"
Oct 12 17:03:15 np0005481680 objective_pare[125610]:        }
Oct 12 17:03:15 np0005481680 objective_pare[125610]:    ]
Oct 12 17:03:15 np0005481680 objective_pare[125610]: }
Oct 12 17:03:15 np0005481680 systemd[1]: libpod-8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e.scope: Deactivated successfully.
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.584904316 +0000 UTC m=+0.557659830 container died 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:03:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-465193cec3fc5090db2be4a07897ef78d6dd470156096445dde9fbe0de72a6c0-merged.mount: Deactivated successfully.
Oct 12 17:03:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:15 np0005481680 podman[125552]: 2025-10-12 21:03:15.658240411 +0000 UTC m=+0.630995925 container remove 8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_pare, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:03:15 np0005481680 systemd[1]: libpod-conmon-8ae3ef730856a3b8865e5588d6f272e6601cd6ac81230eb191ab8ddd56fdb62e.scope: Deactivated successfully.
Oct 12 17:03:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:16 np0005481680 python3.9[125801]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.433229979 +0000 UTC m=+0.065940357 container create b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:03:16 np0005481680 systemd[1]: Started libpod-conmon-b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6.scope.
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.405801858 +0000 UTC m=+0.038512286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.533566215 +0000 UTC m=+0.166276623 container init b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.544667598 +0000 UTC m=+0.177377986 container start b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:03:16 np0005481680 busy_panini[125871]: 167 167
Oct 12 17:03:16 np0005481680 systemd[1]: libpod-b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6.scope: Deactivated successfully.
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.553681279 +0000 UTC m=+0.186391667 container attach b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.554222483 +0000 UTC m=+0.186932871 container died b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:03:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e0957c29abd10683cc9ce7731ca5269b41775244d652a67ebac56e7ba3f0208f-merged.mount: Deactivated successfully.
Oct 12 17:03:16 np0005481680 podman[125830]: 2025-10-12 21:03:16.606283424 +0000 UTC m=+0.238993802 container remove b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_panini, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:03:16 np0005481680 systemd[1]: libpod-conmon-b46971b18406df27722d4eeaca293b88481e0fb5c37633a019d00b39ffc003a6.scope: Deactivated successfully.
Oct 12 17:03:16 np0005481680 podman[125969]: 2025-10-12 21:03:16.859467028 +0000 UTC m=+0.072280818 container create 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 12 17:03:16 np0005481680 podman[125969]: 2025-10-12 21:03:16.828009304 +0000 UTC m=+0.040823134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:16 np0005481680 systemd[1]: Started libpod-conmon-4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3.scope.
Oct 12 17:03:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:03:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600982eaa04e6640df33e98330ec879974ce85a5e56af3e687a223fdece74ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600982eaa04e6640df33e98330ec879974ce85a5e56af3e687a223fdece74ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600982eaa04e6640df33e98330ec879974ce85a5e56af3e687a223fdece74ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600982eaa04e6640df33e98330ec879974ce85a5e56af3e687a223fdece74ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:16.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:03:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:16.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:03:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:16.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:03:16 np0005481680 podman[125969]: 2025-10-12 21:03:16.98388763 +0000 UTC m=+0.196701400 container init 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:03:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:16.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:17 np0005481680 podman[125969]: 2025-10-12 21:03:17.00072113 +0000 UTC m=+0.213534890 container start 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:03:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:17 np0005481680 podman[125969]: 2025-10-12 21:03:17.008554331 +0000 UTC m=+0.221368111 container attach 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:03:17 np0005481680 python3.9[126042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:17 np0005481680 python3.9[126175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:17 np0005481680 lvm[126198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:03:17 np0005481680 lvm[126198]: VG ceph_vg0 finished
Oct 12 17:03:17 np0005481680 agitated_solomon[126020]: {}
Oct 12 17:03:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210317 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:03:17 np0005481680 systemd[1]: libpod-4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3.scope: Deactivated successfully.
Oct 12 17:03:17 np0005481680 podman[125969]: 2025-10-12 21:03:17.820996236 +0000 UTC m=+1.033809996 container died 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:03:17 np0005481680 systemd[1]: libpod-4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3.scope: Consumed 1.309s CPU time.
Oct 12 17:03:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4600982eaa04e6640df33e98330ec879974ce85a5e56af3e687a223fdece74ca-merged.mount: Deactivated successfully.
Oct 12 17:03:17 np0005481680 podman[125969]: 2025-10-12 21:03:17.866402806 +0000 UTC m=+1.079216556 container remove 4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:03:17 np0005481680 systemd[1]: libpod-conmon-4a8e3eb3238d0b096a046a7e68ce8e4104e96bc66b02abd517f5aeabd20bcfe3.scope: Deactivated successfully.
Oct 12 17:03:17 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:03:17 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:17 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:03:17 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:03:18
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.nfs', 'images', '.rgw.root', 'default.rgw.control', 'volumes']
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:03:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:03:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:03:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:03:18 np0005481680 python3.9[126386]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:03:18 np0005481680 systemd[1]: Reloading.
Oct 12 17:03:18 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:03:18 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:03:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:03:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:18.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:19 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 17:03:19 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 17:03:19 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 17:03:19 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 17:03:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 12 17:03:20 np0005481680 python3.9[126579]: ansible-ansible.builtin.service_facts Invoked
Oct 12 17:03:20 np0005481680 network[126596]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:03:20 np0005481680 network[126597]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:03:20 np0005481680 network[126598]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:03:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:21.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:21.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:22] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:22] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 12 17:03:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:23 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 3.
Oct 12 17:03:23 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:03:23 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.750s CPU time.
Oct 12 17:03:23 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:03:23 np0005481680 podman[126744]: 2025-10-12 21:03:23.820144788 +0000 UTC m=+0.072940206 container create b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:03:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7caf45340daa5e1d3a2c1040fdfcaa23b2afbb8d538ff114f13e52f7c70a2f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7caf45340daa5e1d3a2c1040fdfcaa23b2afbb8d538ff114f13e52f7c70a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7caf45340daa5e1d3a2c1040fdfcaa23b2afbb8d538ff114f13e52f7c70a2f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7caf45340daa5e1d3a2c1040fdfcaa23b2afbb8d538ff114f13e52f7c70a2f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:03:23 np0005481680 podman[126744]: 2025-10-12 21:03:23.789925895 +0000 UTC m=+0.042721373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:03:23 np0005481680 podman[126744]: 2025-10-12 21:03:23.909520493 +0000 UTC m=+0.162315941 container init b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:03:23 np0005481680 podman[126744]: 2025-10-12 21:03:23.918273157 +0000 UTC m=+0.171068585 container start b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:03:23 np0005481680 bash[126744]: b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:03:23 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:03:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:03:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:24 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:03:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 12 17:03:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:03:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:03:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:03:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:26.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:03:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:27.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:28 np0005481680 python3.9[126974]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:03:28 np0005481680 python3.9[127052]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0665d0 =====
Oct 12 17:03:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0665d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:29 np0005481680 radosgw[95273]: beast: 0x7f509b0665d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:29.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:29 np0005481680 python3.9[127206]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:03:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:03:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:03:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:30 np0005481680 python3.9[127358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:31.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:31.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:31 np0005481680 python3.9[127437]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:32] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:32] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:03:32 np0005481680 python3.9[127590]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 12 17:03:32 np0005481680 systemd[1]: Starting Time & Date Service...
Oct 12 17:03:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210332 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:03:32 np0005481680 systemd[1]: Started Time & Date Service.
Oct 12 17:03:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:33.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:33.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:03:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:03:33 np0005481680 python3.9[127748]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:03:34 np0005481680 python3.9[127925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:03:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:35.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:03:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:35 np0005481680 python3.9[128004]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000009:nfs.cephfs.2: -2
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:03:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 12 17:03:36 np0005481680 python3.9[128170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:36 np0005481680 python3.9[128248]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ps4xpudt recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:36.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:03:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:36.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:03:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:37.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:37.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:37 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6bc000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:37 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:37 np0005481680 python3.9[128404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:38 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:03:38 np0005481680 python3.9[128482]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:39.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:03:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:39.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:03:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:39 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:39 np0005481680 python3.9[128635]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:03:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210339 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:03:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:39 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:40 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:03:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:40 np0005481680 python3[128789]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 12 17:03:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:41.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:41.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:41 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:41 np0005481680 python3.9[128942]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:41 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:03:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:42 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:42 np0005481680 python3.9[129021]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:03:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:43.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:43 np0005481680 python3.9[129173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:43 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:43 np0005481680 python3.9[129252]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:43 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:44 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:03:44 np0005481680 python3.9[129405]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:03:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:03:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:45 np0005481680 python3.9[129483]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:45 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:45 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:45 np0005481680 python3.9[129637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:46 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:03:46 np0005481680 python3.9[129715]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:46.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:03:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:46.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:03:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:46.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:03:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:47.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:47 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:47 np0005481680 python3.9[129868]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:03:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:47 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:48 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:48 np0005481680 python3.9[129947]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:03:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:03:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:03:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:03:48 np0005481680 python3.9[130099]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
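This task is the syntax gate for the edpm firewall role: it concatenates the staged fragments under /etc/nftables and pipes them through nft -c -f - (check-only; nothing is applied) exactly as logged above. The same check, rewired from Python for illustration (the file list is copied verbatim from the logged command; the subprocess plumbing is a sketch, not the role's source):

    #!/usr/bin/env python3
    # Re-run the logged dry-run check: concatenate the edpm nftables
    # fragments and feed them to `nft -c -f -` (parse only, no commit).
    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = b"".join(open(p, "rb").read() for p in FRAGMENTS)
    result = subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                            capture_output=True)
    if result.returncode != 0:
        raise SystemExit(result.stderr.decode())
    print("ruleset parses cleanly")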
Oct 12 17:03:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:49.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:49 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:49 np0005481680 systemd[1]: session-19.scope: Deactivated successfully.
Oct 12 17:03:49 np0005481680 systemd[1]: session-19.scope: Consumed 1min 40.211s CPU time.
Oct 12 17:03:49 np0005481680 systemd-logind[783]: Session 19 logged out. Waiting for processes to exit.
Oct 12 17:03:49 np0005481680 systemd-logind[783]: Removed session 19.
Oct 12 17:03:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:49 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:50 np0005481680 python3.9[130256]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
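Here blockinfile maintains an idempotent, marker-delimited block of four include lines in /etc/sysconfig/nftables.conf, validating the candidate file with nft -c -f %s before it replaces the original. A stripped-down sketch of the marker-block technique itself (markers, block content, and path are taken from the logged arguments; the replace logic is illustrative, not the module's source):

    #!/usr/bin/env python3
    # Minimal marker-block update in the style of ansible.builtin.blockinfile:
    # replace the text between the BEGIN/END markers, or append it if absent.
    import re
    from pathlib import Path

    MARK_BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    MARK_END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = "\n".join([
        'include "/etc/nftables/iptables.nft"',
        'include "/etc/nftables/edpm-chains.nft"',
        'include "/etc/nftables/edpm-rules.nft"',
        'include "/etc/nftables/edpm-jumps.nft"',
    ])

    path = Path("/etc/sysconfig/nftables.conf")
    text = path.read_text()                      # create=False: file must exist
    managed = f"{MARK_BEGIN}\n{BLOCK}\n{MARK_END}"
    pattern = re.compile(re.escape(MARK_BEGIN) + r".*?" + re.escape(MARK_END),
                         re.DOTALL)
    if pattern.search(text):
        text = pattern.sub(lambda m: managed, text)
    else:
        text = text.rstrip("\n") + "\n" + managed + "\n"
    path.write_text(text)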
Oct 12 17:03:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:50 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:50 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:03:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:03:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:50 np0005481680 python3.9[130409]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:51.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:51.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:51 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:51 np0005481680 python3.9[130563]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:03:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:51 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:52] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Oct 12 17:03:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:03:52] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
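Prometheus 2.51.0 on 192.168.122.100 scrapes the mgr prometheus module every ten seconds; the steady 200 responses of ~48 KiB show the exporter is healthy. A quick scrape check for comparison; note that port 9283 is the module's usual default and an assumption here, since the access log omits it:

    #!/usr/bin/env python3
    # Fetch the ceph-mgr prometheus endpoint and count exposed sample lines.
    # Port 9283 is assumed (module default); the log does not record it.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as r:
        body = r.read().decode()
        samples = [l for l in body.splitlines()
                   if l and not l.startswith("#")]
        print(f"{r.status} OK, {len(body)} bytes, {len(samples)} samples")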
Oct 12 17:03:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:52 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:52 np0005481680 python3.9[130715]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 12 17:03:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:53.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:53.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:53 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:53 np0005481680 python3.9[130868]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
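The two ansible.posix.mount tasks establish persistent hugetlbfs mounts with explicit page sizes (1 GiB at /dev/hugepages1G, 2 MiB at /dev/hugepages2M), the usual pattern for exposing both page sizes to DPDK/OVS and guest VMs; state=mounted also records them in fstab. A small verification sketch under the assumption that recent kernels report 1 GiB pages as pagesize=1024M in /proc/mounts:

    #!/usr/bin/env python3
    # Check that the hugetlbfs mounts from the two tasks above are in place.
    # Mount points come from the logged args; option spelling is an assumption
    # (recent kernels normalize pagesize=1G to pagesize=1024M).
    EXPECTED = {"/dev/hugepages1G": "pagesize=1024M",
                "/dev/hugepages2M": "pagesize=2M"}

    with open("/proc/mounts") as f:
        mounts = {fields[1]: fields[3]            # mountpoint -> options
                  for fields in (line.split() for line in f)
                  if fields[2] == "hugetlbfs"}

    for point, opt in EXPECTED.items():
        opts = mounts.get(point, "")
        print(f"{point}: {'ok' if opt in opts else 'MISSING'}"
              f" ({opts or 'not mounted'})")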
Oct 12 17:03:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:53 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a00023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:54 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:54 np0005481680 systemd-logind[783]: Session 45 logged out. Waiting for processes to exit.
Oct 12 17:03:54 np0005481680 systemd[1]: session-45.scope: Deactivated successfully.
Oct 12 17:03:54 np0005481680 systemd[1]: session-45.scope: Consumed 37.646s CPU time.
Oct 12 17:03:54 np0005481680 systemd-logind[783]: Removed session 45.
Oct 12 17:03:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:55 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:03:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:55 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:56 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:56.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:03:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:56.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:03:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:03:56.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
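Both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) are unreachable, so Alertmanager exhausts its retry budget and drops the notification; alert evaluation itself is unaffected, only delivery fails. A quick reachability probe for the receiver URL, copied from the log (the empty-alerts payload and 5 s timeout are arbitrary illustrations):

    #!/usr/bin/env python3
    # Probe the failing Prometheus receiver the way Alertmanager would:
    # an HTTP POST that either connects or times out.
    import json
    import urllib.request

    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(URL,
                                 data=json.dumps({"alerts": []}).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:        # covers timeouts and connection failures
        print("receiver unreachable:", exc)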
Oct 12 17:03:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:57.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:57.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:57 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:57 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:58 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:03:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:03:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:03:59.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:03:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:03:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:03:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:03:59.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:03:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:59 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a40016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:03:59 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:03:59 np0005481680 systemd-logind[783]: New session 46 of user zuul.
Oct 12 17:03:59 np0005481680 systemd[1]: Started Session 46 of User zuul.
Oct 12 17:04:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:00 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.664839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040664901, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2007, "num_deletes": 250, "total_data_size": 4345588, "memory_usage": 4418696, "flush_reason": "Manual Compaction"}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040681186, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2653251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10727, "largest_seqno": 12733, "table_properties": {"data_size": 2646666, "index_size": 3464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16255, "raw_average_key_size": 20, "raw_value_size": 2632318, "raw_average_value_size": 3265, "num_data_blocks": 153, "num_entries": 806, "num_filter_entries": 806, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302834, "oldest_key_time": 1760302834, "file_creation_time": 1760303040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 16413 microseconds, and 10403 cpu microseconds.
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.681255) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2653251 bytes OK
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.681282) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.683400) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.683424) EVENT_LOG_v1 {"time_micros": 1760303040683418, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.683449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4337517, prev total WAL file size 4337517, number of live WAL files 2.
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.685345) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2591KB)], [26(12MB)]
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040685466, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 15774999, "oldest_snapshot_seqno": -1}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4413 keys, 14156873 bytes, temperature: kUnknown
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040782014, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14156873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14123068, "index_size": 21663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111108, "raw_average_key_size": 25, "raw_value_size": 14038376, "raw_average_value_size": 3181, "num_data_blocks": 933, "num_entries": 4413, "num_filter_entries": 4413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.782878) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14156873 bytes
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.784483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.5 rd, 145.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.5 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.3) OK, records in: 4837, records dropped: 424 output_compression: NoCompression
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.784554) EVENT_LOG_v1 {"time_micros": 1760303040784536, "job": 10, "event": "compaction_finished", "compaction_time_micros": 97072, "compaction_time_cpu_micros": 58281, "output_level": 6, "num_output_files": 1, "total_output_size": 14156873, "num_input_records": 4837, "num_output_records": 4413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040785656, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303040790659, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.685217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.790750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.790756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.790759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.790761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:00.790763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
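The compaction summary at 21:04:00 is internally consistent: job 10 merged the fresh L0 table #28 (2,653,251 bytes) with the existing L6 table #26 into one 14,156,873-byte L6 table, dropping 424 of 4,837 records (tombstones and overwritten keys). The logged amplification figures follow directly from those byte counts, as this short check shows (all numbers copied from the EVENT_LOG lines above):

    #!/usr/bin/env python3
    # Reproduce the amplification figures from rocksdb job 10 using the
    # byte counts logged in the compaction events above.
    l0_in = 2_653_251        # table #28, the Level-0 input
    total_in = 15_774_999    # input_data_size: tables #28 + #26
    out = 14_156_873         # the new L6 table #29

    write_amplify = out / l0_in                    # bytes written per new byte
    read_write_amplify = (total_in + out) / l0_in  # bytes moved per new byte

    print(f"write-amplify      {write_amplify:.1f}")        # ~5.3, as logged
    print(f"read-write-amplify {read_write_amplify:.1f}")   # ~11.3, as logged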
Oct 12 17:04:00 np0005481680 python3.9[131080]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 12 17:04:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:01.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:01.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:01 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:01 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:01 np0005481680 python3.9[131234]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:04:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:02] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Oct 12 17:04:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:02] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Oct 12 17:04:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:02 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:02 np0005481680 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 12 17:04:02 np0005481680 python3.9[131388]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 12 17:04:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:03.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:03.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:04:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:04:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:03 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:03 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:03 np0005481680 python3.9[131544]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.joifxjmz follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:04 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:04 np0005481680 python3.9[131669]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.joifxjmz mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303043.192037-102-161125802181494/.source.joifxjmz _original_basename=.cl637_x0 follow=False checksum=3befaa4546d49fd1d1f152e2ba3464f2519d4b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:05.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:05 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:05 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:05 np0005481680 python3.9[131823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:04:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:06 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:06 np0005481680 python3.9[131975]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUrMycXYVcL+7zn1LfDS/XAo4B1Q6k5q88AOQO0V1DSy0lEH/22bkAJHbcPijfrl3dhMeGJcwogjwt3PTaeqdOiJexXWbLlsxRvSVJMBvNJX3d2P72MUbflbh5Up3C18L/utF0UCYl6dSVtlMn8JKKaLAe4rlMOU72BTSoS8TVprRknp7VVeB6An8eZLeH0Vk3dXubE2zFgd0xTHQlinEHtdg+yc9M4YYfZ8EV8vU2z9Xsa0aORHhrZRAT8CIFo9CkIbUeF9U9UR5b4sTijzhP9C3f/jgf79E6nl5e9ZzxcuKmDQ8jiLVf9bRqRhbGR+2wueXEfdYVF58M+By6HungbQnlFlaAlAq1BZolYftt6FtG4PtJpO4RILyTPU5Wb+d0orXLr7Y0xldsuHX4yy7Q4d/PlsHUH/qrAga42txPkNPTQE4+HSwcEVkRiZA1fcJsF+FWjsZCEXgMPvo/sTLe/MaxGZuSQIEQPEoSYpCSgqtVRP32knPzV5IXlXE4WYk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGU20IB+SJC12pC7UZenWEz6ArNpBeKEDHazsNAGvY/c#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMx+m4KZ0FVqQWbD//2MFxyUmjEPegKQLve0bwFOx/bTj8jI1C2rNIhPSacPtNi0AR7NLdrRkvdxWrICVRa5jBk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI6e1crx7qVqXhyhbhrFBWUGYOVGZUckPXjjZRcw6shnuNvN01Ag5FgAnDlOUf9AeDbMJJ7asetW38PfKwp3/wUqRxl6jCE2+lIg49G2BTQRMguppb34XFm9BAFcSc1iMhuztA8ACxAYwJ8vjbpMkNgSvJ+U80Mc/lP3PC6jJhms3AEnjV7lLZhIbI+drPqehvFl/aejMY7h+c+8NzUiayfxI/5FuGWvSQCwgfHsxSKBAO1tnopsJGNwhGbHmsPsnqgjjAQ0UooAowO7FedSCJCxrrtUUmiAyNxIOATVNFIfqW7ZXK7wunVDbA3GJS4c73Ti7FSvHVLBg5++l1EqNCtuKjyX2PMYhWt08uObIvKBPSQWGI8aQtipxRnNKLG3ZFXQqT0dS5Mv64Y1OHRdncngiRX/UuWH4HXkWBFxPGcZPhNlMI7d/g7SORIO+Ol/V3Oy0XLP2vNKNb92QKnop/OoXlXQhjYfczYlHyVrmfzyuMoqs/6Cy2PJpm9hp2AW0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKO3ATPH4ob2WViy+ekA59ZjCoRjtCwXOpQhowimFdK8#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJx1ArXrcQNJ8gj4djaajKtJoo4uOxqSz3y53KT0rL9ZZxu+bEQTTKo7s2CbDRC1r+reQ2lNcQ3me495Hz/iRwY=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4NhxrmrE6kUiwxXL3Yi9FYiE8LSb9llx9a8aCxpQy0BN/z37HFsaveUs/S6/bMKbYlXctigpmXWiizLmxfgqfWehp6Ae0JBqKA6kmyPWdRMHWbWCWUgBxM15/FjaaUchPj9aRQ97rq/+SxsA65gf965h5bfZaLw9eiZRgOvTrF5uOZqtZeqhhLa6hSuz04Ge7tgfG3ZQ/2w5IghJOraXAnvcFjBaAd2BYOCFm8bVOJa/ktqAhTQjBr1UC+WQT9E5rrAPK2Y5FYF16ZsMZVfS4sOWWtb5WjjsN1CkN2aIQwBEkslq3Mxh0OL8MXo86lWGS4UTuYjJHeTrFdbcJACHFv4t2xyGZ1L6/vQR/Hs6IXt1bg/EHOIR6wZ/fnWKQSI+S2iBCecgtvPeRMJFClfTDB2qjXMbgf2WckK9GQc4YbR5g1F3yFwT73rS0GJXnfQutVqRwoutLLsV99mGde8K09i2ak5vt9f3fytvFMef1/8IF18PW0fHqck/R+M6qPs0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILEEtOxHqqLB9Xl7rlloGLVlf2DYc1jvhr2nh17CvdGv#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7NghpfjXuB3jQjR53XOAR6x9lzT8iIX/Wi1Ye+NTbUBQF+NRqUeXBfYtcFOWUtcq23Rnw/xb2wrN3GnbrB9hk=#012 create=True mode=0644 path=/tmp/ansible.joifxjmz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:06.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:04:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:07.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:07.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:07 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:07 np0005481680 python3.9[132129]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.joifxjmz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:04:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:07 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:08 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:08 np0005481680 python3.9[132283]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.joifxjmz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
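The known-hosts refresh above follows a safe temp-file pattern: stat and slurp the current /etc/ssh/ssh_known_hosts, write the cluster's RSA/ed25519/ECDSA host keys into a marker-managed block in /tmp/ansible.joifxjmz, copy that over the system file with cat, then remove the temp file. A small end-state check, assuming the standard ssh-keygen -F lookup (hostnames taken from the logged block; exit code 0 means a matching entry exists):

    #!/usr/bin/env python3
    # Confirm the refreshed /etc/ssh/ssh_known_hosts resolves each cluster host.
    import subprocess

    HOSTS = ["compute-0.ctlplane.example.com",
             "compute-1.ctlplane.example.com",
             "compute-2.ctlplane.example.com"]

    for host in HOSTS:
        rc = subprocess.run(["ssh-keygen", "-F", host,
                             "-f", "/etc/ssh/ssh_known_hosts"],
                            stdout=subprocess.DEVNULL).returncode
        print(f"{host}: {'found' if rc == 0 else 'missing'}")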
Oct 12 17:04:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:04:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:09.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:04:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:09.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:09 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:09 np0005481680 systemd-logind[783]: Session 46 logged out. Waiting for processes to exit.
Oct 12 17:04:09 np0005481680 systemd[1]: session-46.scope: Deactivated successfully.
Oct 12 17:04:09 np0005481680 systemd[1]: session-46.scope: Consumed 6.706s CPU time.
Oct 12 17:04:09 np0005481680 systemd-logind[783]: Removed session 46.
Oct 12 17:04:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:09 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:10 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:04:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:11.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:11.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:11 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b80013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:11 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:12] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:04:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:12] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:04:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:12 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:13.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:13 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.787496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053787526, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 351, "num_deletes": 251, "total_data_size": 219752, "memory_usage": 226856, "flush_reason": "Manual Compaction"}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053790655, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 217868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12734, "largest_seqno": 13084, "table_properties": {"data_size": 215710, "index_size": 322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5193, "raw_average_key_size": 17, "raw_value_size": 211460, "raw_average_value_size": 719, "num_data_blocks": 15, "num_entries": 294, "num_filter_entries": 294, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303041, "oldest_key_time": 1760303041, "file_creation_time": 1760303053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 3193 microseconds, and 1137 cpu microseconds.
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.790690) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 217868 bytes OK
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.790705) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.792493) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.792509) EVENT_LOG_v1 {"time_micros": 1760303053792505, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.792523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 217416, prev total WAL file size 217416, number of live WAL files 2.
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.793275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(212KB)], [29(13MB)]
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053793313, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 14374741, "oldest_snapshot_seqno": -1}
Oct 12 17:04:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:13 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8001eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4197 keys, 12432515 bytes, temperature: kUnknown
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053859472, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12432515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12402028, "index_size": 18921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 107549, "raw_average_key_size": 25, "raw_value_size": 12322910, "raw_average_value_size": 2936, "num_data_blocks": 802, "num_entries": 4197, "num_filter_entries": 4197, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.859713) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12432515 bytes
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.861156) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.0 rd, 187.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(123.0) write-amplify(57.1) OK, records in: 4707, records dropped: 510 output_compression: NoCompression
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.861177) EVENT_LOG_v1 {"time_micros": 1760303053861167, "job": 12, "event": "compaction_finished", "compaction_time_micros": 66232, "compaction_time_cpu_micros": 30376, "output_level": 6, "num_output_files": 1, "total_output_size": 12432515, "num_input_records": 4707, "num_output_records": 4197, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053861333, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303053864721, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.793219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.864799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.864805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.864808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.864811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:04:13 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:04:13.864814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
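[annotation] The amplification figures in the job 12 summary above can be rechecked from the exact byte counts in the surrounding EVENT_LOG entries; a small sketch:

    # Recompute RocksDB's amplification factors for compaction job 12.
    # All three inputs come from the EVENT_LOG lines above.
    l0_bytes = 217868        # table #31 file_size: the L0 input
    total_in = 14374741      # compaction_started input_data_size (L0 + L6)
    out_bytes = 12432515     # table #32 file_size: the new L6 output

    write_amp = out_bytes / l0_bytes             # ~57.1 -> write-amplify(57.1)
    rw_amp = (total_in + out_bytes) / l0_bytes   # ~123.0 -> read-write-amplify(123.0)
    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")

The large factors simply reflect a 0.2 MB L0 flush forcing a rewrite of the ~13.5 MB L6 file during the monitor's manual compaction of its store.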
Oct 12 17:04:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:14 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:14 np0005481680 systemd-logind[783]: New session 47 of user zuul.
Oct 12 17:04:14 np0005481680 systemd[1]: Started Session 47 of User zuul.
Oct 12 17:04:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:15.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:15 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:15 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:16 np0005481680 python3.9[132495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:04:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:16 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8001eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:16.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:04:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:16.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
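[annotation] Both dashboard webhook receivers are unreachable here: compute-1 times out at the HTTP level and compute-2 fails the TCP dial. A sketch that probes the receivers the way Alertmanager does; the empty alert list is schematic (real notifications carry full alert objects):

    import json, urllib.request

    payload = json.dumps({"alerts": []}).encode()  # schematic body
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        try:
            print(url, urllib.request.urlopen(req, timeout=5).status)
        except OSError as exc:
            print(url, "unreachable:", exc)  # the i/o timeout seen above

Only the active mgr typically serves the dashboard receiver, so probes against standby controllers failing like this is unsurprising.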
Oct 12 17:04:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:17.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:17.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:17 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:17 np0005481680 python3.9[132652]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 12 17:04:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:17 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:18 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:04:18
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', '.nfs', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control']
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
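[annotation] Each pg_autoscaler line above is reproducible: the raw pg target is capacity_ratio x bias x a cluster-wide PG budget, and a budget of 300 reproduces every line here exactly (plausibly 3 OSDs x mon_target_pg_per_osd=100, though neither figure appears in this log). A sketch of the computation before power-of-two quantization:

    # Reproduce the autoscaler's raw pg targets from the logged ratios.
    TOTAL_TARGET_PGS = 300  # assumption; matches every logged line exactly

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),   # -> 0.0021557...
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),   # -> 0.0006104...
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),  # -> 0.0006486...
    ]
    for name, capacity_ratio, bias in pools:
        print(name, capacity_ratio * bias * TOTAL_TARGET_PGS)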
Oct 12 17:04:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:04:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:04:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:04:18 np0005481680 python3.9[132807]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:04:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:19.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
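[annotation] This burst of mon_commands is the cephadm mgr module refreshing its state: minimal conf, admin and bootstrap-osd keyrings, destroyed-OSD tree, and config-key updates. The same queries can be issued from any admin host; a sketch of two of them:

    import json, subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes an admin keyring on this host.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    print(ceph("config", "generate-minimal-conf"))                  # minimal ceph.conf
    print(json.loads(ceph("osd", "blocklist", "ls", "--format", "json")))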
Oct 12 17:04:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:19 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8001eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2752 writes, 13K keys, 2752 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 2752 writes, 2752 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2752 writes, 13K keys, 2752 commit groups, 1.0 writes per commit group, ingest: 23.31 MB, 0.04 MB/s
Interval WAL: 2752 writes, 2752 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.8      0.14              0.05         6    0.023       0      0       0.0       0.0
  L6      1/0   11.86 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    170.7    149.8      0.39              0.20         5    0.078     20K   2283       0.0       0.0
 Sum      1/0   11.86 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    126.6    149.0      0.53              0.25        11    0.048     20K   2283       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    130.0    152.9      0.52              0.25        10    0.052     20K   2283       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    170.7    149.8      0.39              0.20         5    0.078     20K   2283       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    162.6      0.12              0.05         5    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.020, interval 0.020
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562cd3961350#2 capacity: 304.00 MB usage: 2.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(172,2.05 MB,0.675317%) FilterBlock(12,67.67 KB,0.0217388%) IndexBlock(12,132.98 KB,0.0427196%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 12 17:04:19 np0005481680 python3.9[133093]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.821590069 +0000 UTC m=+0.049221790 container create 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:04:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:19 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:19 np0005481680 systemd[1]: Started libpod-conmon-558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52.scope.
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:19 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:04:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.800729886 +0000 UTC m=+0.028361647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.90844817 +0000 UTC m=+0.136079931 container init 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.916925967 +0000 UTC m=+0.144557708 container start 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:04:19 np0005481680 jovial_kare[133177]: 167 167
Oct 12 17:04:19 np0005481680 systemd[1]: libpod-558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52.scope: Deactivated successfully.
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.923428722 +0000 UTC m=+0.151060483 container attach 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.924202293 +0000 UTC m=+0.151834054 container died 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:04:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-258f94bcfe62c322e0512698bbd57302b7225e9d23d1de216cb2050566235362-merged.mount: Deactivated successfully.
Oct 12 17:04:19 np0005481680 podman[133161]: 2025-10-12 21:04:19.99527846 +0000 UTC m=+0.222910191 container remove 558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 17:04:20 np0005481680 systemd[1]: libpod-conmon-558f5ab3ad3631ef47b3764ec12fd865d64cd2be0761a26598cd1676f68f9d52.scope: Deactivated successfully.
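[annotation] The jovial_kare container above is one of cephadm's throwaway exec containers: it starts, prints "167 167" (the ceph uid/gid baked into the image, which cephadm uses when chowning daemon directories), and exits within milliseconds. A sketch of an equivalent probe; the exact entrypoint cephadm uses may differ:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Print the owner uid/gid of /var/lib/ceph inside the image -- "167 167",
    # matching the container output above. Invocation details are assumptions.
    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip())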
Oct 12 17:04:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:20 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.201035611 +0000 UTC m=+0.048117702 container create 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 17:04:20 np0005481680 systemd[1]: Started libpod-conmon-06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313.scope.
Oct 12 17:04:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.180861585 +0000 UTC m=+0.027943706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.319311065 +0000 UTC m=+0.166393176 container init 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.327965106 +0000 UTC m=+0.175047187 container start 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.331903957 +0000 UTC m=+0.178986088 container attach 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:04:20 np0005481680 xenodochial_poitras[133293]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:04:20 np0005481680 xenodochial_poitras[133293]: --> All data devices are unavailable
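[annotation] xenodochial_poitras is a cephadm-driven ceph-volume run; its verdict means the single candidate device already carries an LVM layer, so no new OSDs can be created from it. A sketch of the corresponding inventory query (routing through cephadm's ceph-volume wrapper is an assumption; running ceph-volume inventory inside the container is equivalent):

    import json, subprocess

    # List device availability as ceph-volume sees it; an unavailable
    # LVM-backed device matches "0 physical, 1 LVM" / "All data devices
    # are unavailable" above.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))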
Oct 12 17:04:20 np0005481680 python3.9[133350]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:04:20 np0005481680 systemd[1]: libpod-06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313.scope: Deactivated successfully.
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.651512038 +0000 UTC m=+0.498594129 container died 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 12 17:04:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4fffc5e3a5ab3aae040b4802df22c76d15a4356ab35e483a170aad97b3bf6694-merged.mount: Deactivated successfully.
Oct 12 17:04:20 np0005481680 podman[133253]: 2025-10-12 21:04:20.716338336 +0000 UTC m=+0.563420407 container remove 06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_poitras, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 12 17:04:20 np0005481680 systemd[1]: libpod-conmon-06128ff93968a4aa54cc8bfd2f3511cd796672754e8d997aadaf786bd7727313.scope: Deactivated successfully.
Oct 12 17:04:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:21.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:04:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:21.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:04:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:21 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.348145849 +0000 UTC m=+0.065705460 container create e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:04:21 np0005481680 systemd[1]: Started libpod-conmon-e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1.scope.
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.325136531 +0000 UTC m=+0.042696222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.44048255 +0000 UTC m=+0.158042191 container init e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.44635377 +0000 UTC m=+0.163913381 container start e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.450321442 +0000 UTC m=+0.167881093 container attach e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:04:21 np0005481680 frosty_lovelace[133601]: 167 167
Oct 12 17:04:21 np0005481680 systemd[1]: libpod-e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1.scope: Deactivated successfully.
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.45688848 +0000 UTC m=+0.174448091 container died e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:04:21 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1347dab8ddf61525e6dcd21d15121ed38f132db83c8a76b1b51fc5d65df2493d-merged.mount: Deactivated successfully.
Oct 12 17:04:21 np0005481680 podman[133539]: 2025-10-12 21:04:21.498669488 +0000 UTC m=+0.216229089 container remove e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct 12 17:04:21 np0005481680 systemd[1]: libpod-conmon-e88e72ab615f6b03d0ca339c8c37ee695a008c0fd6e368cea779eb003ee7e7a1.scope: Deactivated successfully.
Oct 12 17:04:21 np0005481680 podman[133657]: 2025-10-12 21:04:21.687963228 +0000 UTC m=+0.037510130 container create 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:04:21 np0005481680 systemd[1]: Started libpod-conmon-5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6.scope.
Oct 12 17:04:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:21 np0005481680 python3.9[133650]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c664e8b6ec28ed96fcc1315ac4a2de1fee844701780a9ce9ab7219c7caac59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c664e8b6ec28ed96fcc1315ac4a2de1fee844701780a9ce9ab7219c7caac59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c664e8b6ec28ed96fcc1315ac4a2de1fee844701780a9ce9ab7219c7caac59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c664e8b6ec28ed96fcc1315ac4a2de1fee844701780a9ce9ab7219c7caac59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:21 np0005481680 podman[133657]: 2025-10-12 21:04:21.672518673 +0000 UTC m=+0.022065595 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:21 np0005481680 podman[133657]: 2025-10-12 21:04:21.771944255 +0000 UTC m=+0.121491207 container init 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:04:21 np0005481680 podman[133657]: 2025-10-12 21:04:21.779359894 +0000 UTC m=+0.128906786 container start 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:04:21 np0005481680 podman[133657]: 2025-10-12 21:04:21.782530715 +0000 UTC m=+0.132077627 container attach 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:04:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:21 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:22] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:04:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:22] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]: {
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:     "0": [
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:         {
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "devices": [
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "/dev/loop3"
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             ],
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "lv_name": "ceph_lv0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "lv_size": "21470642176",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "name": "ceph_lv0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "tags": {
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.cephx_lockbox_secret": "",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.cluster_name": "ceph",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.crush_device_class": "",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.encrypted": "0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.osd_id": "0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.type": "block",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.vdo": "0",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:                 "ceph.with_tpm": "0"
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             },
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "type": "block",
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:             "vg_name": "ceph_vg0"
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:         }
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]:     ]
Oct 12 17:04:22 np0005481680 naughty_vaughan[133674]: }
Oct 12 17:04:22 np0005481680 systemd[1]: libpod-5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6.scope: Deactivated successfully.
Oct 12 17:04:22 np0005481680 podman[133657]: 2025-10-12 21:04:22.067368788 +0000 UTC m=+0.416915700 container died 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:04:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:22 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-07c664e8b6ec28ed96fcc1315ac4a2de1fee844701780a9ce9ab7219c7caac59-merged.mount: Deactivated successfully.
Oct 12 17:04:22 np0005481680 systemd[1]: session-47.scope: Deactivated successfully.
Oct 12 17:04:22 np0005481680 systemd[1]: session-47.scope: Consumed 4.211s CPU time.
Oct 12 17:04:22 np0005481680 systemd-logind[783]: Session 47 logged out. Waiting for processes to exit.
Oct 12 17:04:22 np0005481680 systemd-logind[783]: Removed session 47.
Oct 12 17:04:22 np0005481680 podman[133657]: 2025-10-12 21:04:22.116385951 +0000 UTC m=+0.465932893 container remove 5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:04:22 np0005481680 systemd[1]: libpod-conmon-5190baeb84e70779f44a163bce59bc7925cc0742bfc55ea73d5f4341501993c6.scope: Deactivated successfully.
Oct 12 17:04:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.690402377 +0000 UTC m=+0.042823966 container create 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:04:22 np0005481680 systemd[1]: Started libpod-conmon-0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076.scope.
Oct 12 17:04:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.765188859 +0000 UTC m=+0.117610478 container init 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.673393222 +0000 UTC m=+0.025814851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.77187158 +0000 UTC m=+0.124293169 container start 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.775117163 +0000 UTC m=+0.127538752 container attach 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:04:22 np0005481680 compassionate_elgamal[133826]: 167 167
Oct 12 17:04:22 np0005481680 systemd[1]: libpod-0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076.scope: Deactivated successfully.
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.779676919 +0000 UTC m=+0.132098548 container died 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:04:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9c7cf001034d93d3cecfb89f4b64ef9f6b193ce958e9fd6226184452d0cf20cb-merged.mount: Deactivated successfully.
Oct 12 17:04:22 np0005481680 podman[133809]: 2025-10-12 21:04:22.829195236 +0000 UTC m=+0.181616825 container remove 0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:04:22 np0005481680 systemd[1]: libpod-conmon-0520da94ac5fe8aea29df45b1a54c17b82af02df22a09ccf3f0ade875b086076.scope: Deactivated successfully.
Oct 12 17:04:23 np0005481680 podman[133849]: 2025-10-12 21:04:23.060266124 +0000 UTC m=+0.081538227 container create e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:04:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:23 np0005481680 systemd[1]: Started libpod-conmon-e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff.scope.
Oct 12 17:04:23 np0005481680 podman[133849]: 2025-10-12 21:04:23.029883416 +0000 UTC m=+0.051155559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:04:23 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:04:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8295809c33a446ff7990c98a59d642a8c321d9ecbf24c5da528a0413a8f9e8f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8295809c33a446ff7990c98a59d642a8c321d9ecbf24c5da528a0413a8f9e8f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8295809c33a446ff7990c98a59d642a8c321d9ecbf24c5da528a0413a8f9e8f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:23 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8295809c33a446ff7990c98a59d642a8c321d9ecbf24c5da528a0413a8f9e8f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:04:23 np0005481680 podman[133849]: 2025-10-12 21:04:23.16418496 +0000 UTC m=+0.185457043 container init e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:04:23 np0005481680 podman[133849]: 2025-10-12 21:04:23.178602129 +0000 UTC m=+0.199874242 container start e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:04:23 np0005481680 podman[133849]: 2025-10-12 21:04:23.182524459 +0000 UTC m=+0.203796572 container attach e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 17:04:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:23 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:23 np0005481680 lvm[133941]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:04:23 np0005481680 lvm[133941]: VG ceph_vg0 finished
Oct 12 17:04:24 np0005481680 focused_hoover[133866]: {}
Oct 12 17:04:24 np0005481680 systemd[1]: libpod-e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff.scope: Deactivated successfully.
Oct 12 17:04:24 np0005481680 systemd[1]: libpod-e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff.scope: Consumed 1.447s CPU time.
Oct 12 17:04:24 np0005481680 podman[133849]: 2025-10-12 21:04:24.066657084 +0000 UTC m=+1.087929157 container died e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:04:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:24 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:24 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8295809c33a446ff7990c98a59d642a8c321d9ecbf24c5da528a0413a8f9e8f5-merged.mount: Deactivated successfully.
Oct 12 17:04:24 np0005481680 podman[133849]: 2025-10-12 21:04:24.115678698 +0000 UTC m=+1.136950771 container remove e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hoover, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:04:24 np0005481680 systemd[1]: libpod-conmon-e9fe4f1af9624ea5f40d4eed9361d0817363da9d8a06cab7d16c00eb70f4feff.scope: Deactivated successfully.
Oct 12 17:04:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:04:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:04:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:25.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:04:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:25 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:25 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:26 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:26.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:04:27 np0005481680 systemd-logind[783]: New session 48 of user zuul.
Oct 12 17:04:27 np0005481680 systemd[1]: Started Session 48 of User zuul.
Oct 12 17:04:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:27.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:27 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:27 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b8004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:28 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:28 np0005481680 python3.9[134139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:04:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:29.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:04:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:04:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:29 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:29 np0005481680 python3.9[134297]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:04:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:29 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:30 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:04:30 np0005481680 python3.9[134382]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 12 17:04:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:31.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:31 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:31 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:32] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:04:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:32] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:04:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:32 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:32 np0005481680 python3.9[134535]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:04:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:33.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:04:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:04:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:33 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:33 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:34 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:34 np0005481680 python3.9[134688]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 17:04:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:35.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:35.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:35 np0005481680 python3.9[134856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:04:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:35 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:35 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0001a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:35 np0005481680 python3.9[135015]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:04:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:36 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:36 np0005481680 systemd[1]: session-48.scope: Deactivated successfully.
Oct 12 17:04:36 np0005481680 systemd[1]: session-48.scope: Consumed 6.550s CPU time.
Oct 12 17:04:36 np0005481680 systemd-logind[783]: Session 48 logged out. Waiting for processes to exit.
Oct 12 17:04:36 np0005481680 systemd-logind[783]: Removed session 48.
Oct 12 17:04:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210436 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:04:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:36.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:04:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:37.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:37 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:37 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:38 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:04:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:39.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:39 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:39 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb68c003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:40 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Oct 12 17:04:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:41.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:41 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:41 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:04:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:04:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:42 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:04:42 np0005481680 systemd-logind[783]: New session 49 of user zuul.
Oct 12 17:04:42 np0005481680 systemd[1]: Started Session 49 of User zuul.
Oct 12 17:04:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:04:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:04:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:43.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:43 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:43 np0005481680 python3.9[135201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:04:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:43 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:44 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:04:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:45.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:45.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:45 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:45 np0005481680 python3.9[135359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:45 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:45 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:04:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:46 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:46 np0005481680 python3.9[135512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:04:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:04:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:04:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:47.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:04:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:47 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:47 np0005481680 python3.9[135665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:47 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:48 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:04:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:04:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:04:48 np0005481680 python3.9[135789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303086.4669597-155-11671980194213/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f17066bed696f99fade9fe546b1898fc2bd0a892 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:48 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:04:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:48 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:04:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:49.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:49 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0003480 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:49 np0005481680 python3.9[135942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:49 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:50 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:50 np0005481680 python3.9[136066]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303088.7861178-155-194296960886836/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7954eef363c68e1c6059660b37d6783bbccff64c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:04:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:50 np0005481680 python3.9[136218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:51.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:51 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:51 np0005481680 python3.9[136342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303090.2861662-155-243506585837051/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a2e8150d338cac45c683c1f836f6a2687f3f8836 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:51 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:51 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:04:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:04:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:04:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:04:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:52 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb694002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:04:52 np0005481680 python3.9[136495]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:53 np0005481680 python3.9[136648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:53 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:53 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:54 np0005481680 python3.9[136801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:54 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:04:54 np0005481680 python3.9[136924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303093.4438024-341-98785641676164/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7a3a6e30654157bbaa01ca1e86c36ccddbee8c56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:55.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:55.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:55 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940030c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:04:55 np0005481680 python3.9[137102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:55 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:56 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:04:56 np0005481680 python3.9[137226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303094.9750752-341-84483123655443/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0f542e78857f7369ad2f63615fd5876955c57b9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:04:56.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:04:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:04:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:57.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:04:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:57 np0005481680 python3.9[137378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:04:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:57 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:57 np0005481680 python3.9[137502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303096.6303215-341-26775588531700/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2e2b70985c1c6537d028b1e5b77f121152746a08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:04:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:57 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940030c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:58 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:04:58 np0005481680 python3.9[137655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210458 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:04:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:04:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:59 np0005481680 python3.9[137807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:04:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:04:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:04:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:04:59.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:04:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:59 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:04:59 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a0004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:04:59 np0005481680 python3.9[137961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:00 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6940030c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:05:00 np0005481680 python3.9[138084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303099.412003-511-12906995146455/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3a524f627c03e901caeaa4bd27e205e4ebdd4b20 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:01.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:01.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:01 np0005481680 python3.9[138236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:01 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6a4003280 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:01 np0005481680 python3.9[138362]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303100.7094445-511-146510163190764/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0f542e78857f7369ad2f63615fd5876955c57b9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:01 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:05:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:05:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:02 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:05:02 np0005481680 python3.9[138516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 12 17:05:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:03.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:03 np0005481680 python3.9[138639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303101.979689-511-28658522548541/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=96c57cd8061d41e2273d0895d8d9666d7ba82f51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:03.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:05:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:05:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:03 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:03 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb688000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:04 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:05:04 np0005481680 python3.9[138793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:05.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:05.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:05 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:05 np0005481680 python3.9[138946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:05 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:05 np0005481680 python3.9[139070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303104.8087864-694-170469156500184/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:06 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:06 np0005481680 python3.9[139222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:06.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:05:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:06.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:05:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:07.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:07.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:07 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:07 np0005481680 python3.9[139375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:07 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:08 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:08 np0005481680 python3.9[139499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303107.0764043-777-142973576698295/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:09.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:09.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:09 np0005481680 python3.9[139652]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:09 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:09 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:10 np0005481680 python3.9[139805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:10 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:10 np0005481680 python3.9[139928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303109.545079-856-205520917495636/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:11.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:11.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:11 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb680000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:11 np0005481680 python3.9[140081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:11 np0005481680 kernel: ganesha.nfsd[138463]: segfault at 50 ip 00007fb769c4032e sp 00007fb721ffa210 error 4 in libntirpc.so.5.8[7fb769c25000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 12 17:05:11 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:05:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[126763]: 12/10/2025 21:05:11 : epoch 68ec179b : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6b0003da0 fd 39 proxy ignored for local
Oct 12 17:05:11 np0005481680 systemd[1]: Started Process Core Dump (PID 140115/UID 0).
Oct 12 17:05:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:05:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:05:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:05:12 np0005481680 python3.9[140236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:13 np0005481680 systemd-coredump[140129]: Process 126770 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 60:
                                                       #0  0x00007fb769c4032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:05:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:13 np0005481680 python3.9[140359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303111.9128215-926-83779597259175/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:13 np0005481680 systemd[1]: systemd-coredump@3-140115-0.service: Deactivated successfully.
Oct 12 17:05:13 np0005481680 systemd[1]: systemd-coredump@3-140115-0.service: Consumed 1.143s CPU time.
Oct 12 17:05:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:13.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:13 np0005481680 podman[140369]: 2025-10-12 21:05:13.233330185 +0000 UTC m=+0.035869338 container died b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:05:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-db7caf45340daa5e1d3a2c1040fdfcaa23b2afbb8d538ff114f13e52f7c70a2f-merged.mount: Deactivated successfully.
Oct 12 17:05:13 np0005481680 podman[140369]: 2025-10-12 21:05:13.303302664 +0000 UTC m=+0.105841847 container remove b6177fbf07d292e2daa0ed9cd3b0840c055b3a2f8812e5fd99a7f6532fe392aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:05:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:05:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:05:13 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.854s CPU time.
Oct 12 17:05:14 np0005481680 python3.9[140560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:05:14 np0005481680 python3.9[140712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:15 np0005481680 python3.9[140861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303114.23322-999-268389711688374/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:16 np0005481680 python3.9[141014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:16.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:05:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:17.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:17 np0005481680 python3.9[141167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210517 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:05:17 np0005481680 python3.9[141291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303116.6028419-1069-123594156878532/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=48e548d27e8de09ed71741f17725854bc86cbb3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:05:18
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
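[annotation] The balancer pass above (mode upmap, max misplaced 0.05) prepared 0 of 10 possible upmap changes, i.e. the 337 PGs are already evenly placed and no optimization plan was applied. A sketch, assuming the ceph CLI and an admin keyring are present on this node (true for the mon host), that reads the same state back with the real `ceph balancer status` command:

    # Sketch: confirm the balancer state logged above. Assumes the ceph CLI
    # and admin keyring are available on this host.
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # expect mode "upmap", active, and no pending plans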
Oct 12 17:05:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:05:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
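[annotation] The pg_autoscaler lines above follow a single formula: pg target = usage_ratio x bias x 300. The factor of 300 is an inference, mon_target_pg_per_osd (default 100) times the three OSDs backing this 60 GiB cluster, but it reproduces every logged value exactly. A sketch of the arithmetic using the numbers from the log:

    # Reproduce the pg_autoscaler arithmetic visible above. The factor of
    # 300 is an inference: mon_target_pg_per_osd (default 100) x 3 OSDs in
    # this cluster; it matches each logged pg target exactly.
    POOL_PGS_PER_CLUSTER = 100 * 3  # assumed: target PGs per OSD x OSD count

    pools = {
        # name: (usage_ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".nfs":               (6.359070782053786e-08, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (usage, bias) in pools.items():
        pg_target = usage * bias * POOL_PGS_PER_CLUSTER
        print(f"{name}: pg target {pg_target}")
    # Matches the logged targets. The "quantized to" value stays at each
    # pool's current pg_num because these tiny targets never cross the
    # autoscaler's change threshold (roughly a 3x difference).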
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:05:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:05:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:19.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:05:20 np0005481680 systemd[1]: session-49.scope: Deactivated successfully.
Oct 12 17:05:20 np0005481680 systemd[1]: session-49.scope: Consumed 27.491s CPU time.
Oct 12 17:05:20 np0005481680 systemd-logind[783]: Session 49 logged out. Waiting for processes to exit.
Oct 12 17:05:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:20 np0005481680 systemd-logind[783]: Removed session 49.
Oct 12 17:05:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:21.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:22] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:05:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:22] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:05:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:23.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:23 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 4.
Oct 12 17:05:23 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:05:23 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.854s CPU time.
Oct 12 17:05:23 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:05:24 np0005481680 podman[141373]: 2025-10-12 21:05:24.146186348 +0000 UTC m=+0.072891185 container create c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:05:24 np0005481680 podman[141373]: 2025-10-12 21:05:24.11421428 +0000 UTC m=+0.040919187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eba0a2a788e6a2204111e064c04caa6d23d733214f8e999d2e7a3eecbb297f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eba0a2a788e6a2204111e064c04caa6d23d733214f8e999d2e7a3eecbb297f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eba0a2a788e6a2204111e064c04caa6d23d733214f8e999d2e7a3eecbb297f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eba0a2a788e6a2204111e064c04caa6d23d733214f8e999d2e7a3eecbb297f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:24 np0005481680 podman[141373]: 2025-10-12 21:05:24.22839905 +0000 UTC m=+0.155103887 container init c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:05:24 np0005481680 podman[141373]: 2025-10-12 21:05:24.247045027 +0000 UTC m=+0.173749864 container start c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:05:24 np0005481680 bash[141373]: c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e
Oct 12 17:05:24 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:05:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:05:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
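[annotation] The block above is the fourth systemd-driven restart of the nfs.cephfs.2.0 ganesha daemon (the repeated exits are what haproxy saw as "Connection refused" at 17:05:17); each restart brings ganesha back up into a fresh 90-second NFS grace period. A sketch that reads the unit's restart counter via systemd's real NRestarts property, with the unit name copied from the journal:

    # Sketch: confirm the restart loop from the unit's restart counter.
    # NRestarts is a real systemd property; the unit name is taken verbatim
    # from the journal lines above.
    import subprocess

    UNIT = ("ceph-5adb8c35-1b74-5730-a252-62321f654cd5"
            "@nfs.cephfs.2.0.compute-0.hypubd.service")

    out = subprocess.run(
        ["systemctl", "show", UNIT, "--property=NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. NRestarts=4, matching "restart counter is at 4"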
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:25 np0005481680 systemd-logind[783]: New session 50 of user zuul.
Oct 12 17:05:25 np0005481680 systemd[1]: Started Session 50 of User zuul.
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:05:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:05:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:26 np0005481680 python3.9[141805]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:26 np0005481680 podman[141832]: 2025-10-12 21:05:26.944557426 +0000 UTC m=+0.071731299 container create 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:05:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:26.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:05:27 np0005481680 systemd[1]: Started libpod-conmon-68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9.scope.
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:26.915602482 +0000 UTC m=+0.042776395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:27 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:27.068702831 +0000 UTC m=+0.195876754 container init 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:27.081042183 +0000 UTC m=+0.208216026 container start 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:27.084970203 +0000 UTC m=+0.212144156 container attach 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:05:27 np0005481680 infallible_shannon[141871]: 167 167
Oct 12 17:05:27 np0005481680 systemd[1]: libpod-68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9.scope: Deactivated successfully.
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:27.089417516 +0000 UTC m=+0.216591379 container died 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:05:27 np0005481680 systemd[1]: var-lib-containers-storage-overlay-eac1342cdd39872c9a6bda31722f5f05eeb4ee5c2db6b6bc863b3e20ae6f2b53-merged.mount: Deactivated successfully.
Oct 12 17:05:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:05:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.004000098s ======
Oct 12 17:05:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:27.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000098s
Oct 12 17:05:27 np0005481680 podman[141832]: 2025-10-12 21:05:27.152445232 +0000 UTC m=+0.279619105 container remove 68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_shannon, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:05:27 np0005481680 systemd[1]: libpod-conmon-68789b0458353e27ca998f98f0efea0fdf4c12b60af0e6d9ccc2e365b6db81d9.scope: Deactivated successfully.
Oct 12 17:05:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:27 np0005481680 podman[141949]: 2025-10-12 21:05:27.417049646 +0000 UTC m=+0.077803622 container create 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:05:27 np0005481680 podman[141949]: 2025-10-12 21:05:27.383810823 +0000 UTC m=+0.044564849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:27 np0005481680 systemd[1]: Started libpod-conmon-297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0.scope.
Oct 12 17:05:27 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:27 np0005481680 podman[141949]: 2025-10-12 21:05:27.61064385 +0000 UTC m=+0.271397796 container init 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:05:27 np0005481680 podman[141949]: 2025-10-12 21:05:27.626756559 +0000 UTC m=+0.287510495 container start 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:05:27 np0005481680 podman[141949]: 2025-10-12 21:05:27.631235362 +0000 UTC m=+0.291989338 container attach 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:05:27 np0005481680 hardcore_feistel[141970]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:05:27 np0005481680 hardcore_feistel[141970]: --> All data devices are unavailable
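[annotation] The hardcore_feistel container is cephadm's periodic ceph-volume device scan: it sees 0 physical and 1 LVM data device and reports all of them unavailable, which is expected here since the lone LVM device already carries OSD 0 (see the lvm list output just below). A sketch, assuming the cephadm wrapper on this host and its pass-through syntax, that re-runs the same inventory as JSON:

    # Sketch: re-run the device scan behind the report above. Assumes the
    # cephadm wrapper is installed and forwards args after "--" to
    # ceph-volume inside the same container image.
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"])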
Oct 12 17:05:28 np0005481680 python3.9[142048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:28 np0005481680 systemd[1]: libpod-297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0.scope: Deactivated successfully.
Oct 12 17:05:28 np0005481680 podman[141949]: 2025-10-12 21:05:28.036449078 +0000 UTC m=+0.697203034 container died 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 12 17:05:28 np0005481680 systemd[1]: var-lib-containers-storage-overlay-34f77ec855d4837ae90127273824597bba331f3c5d131ac00732d7a942d326ca-merged.mount: Deactivated successfully.
Oct 12 17:05:28 np0005481680 podman[141949]: 2025-10-12 21:05:28.101626469 +0000 UTC m=+0.762380425 container remove 297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_feistel, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:05:28 np0005481680 systemd[1]: libpod-conmon-297b4fdd7507f1212c9d53ff4cc6b8c04c40a5edbb11cd6aa1dbf3f03402b4f0.scope: Deactivated successfully.
Oct 12 17:05:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:28 np0005481680 python3.9[142267]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303127.1968327-62-102800357116031/.source.conf _original_basename=ceph.conf follow=False checksum=a979d858d702e9cda026cda76c8f3f1d01067553 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:28 np0005481680 podman[142285]: 2025-10-12 21:05:28.983911891 +0000 UTC m=+0.084011250 container create 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:05:29 np0005481680 systemd[1]: Started libpod-conmon-97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352.scope.
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:28.945839656 +0000 UTC m=+0.045939025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:29.086976902 +0000 UTC m=+0.187076251 container init 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:29.097734755 +0000 UTC m=+0.197834084 container start 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:29.103324276 +0000 UTC m=+0.203423635 container attach 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:05:29 np0005481680 zealous_gates[142302]: 167 167
Oct 12 17:05:29 np0005481680 systemd[1]: libpod-97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352.scope: Deactivated successfully.
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:29.105337747 +0000 UTC m=+0.205437066 container died 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:05:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d545cef6823d7926ad10e4f20ce04ce1052b079069d222ff6a11bd23efe31a24-merged.mount: Deactivated successfully.
Oct 12 17:05:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:29.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:29 np0005481680 podman[142285]: 2025-10-12 21:05:29.16586878 +0000 UTC m=+0.265968109 container remove 97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:05:29 np0005481680 systemd[1]: libpod-conmon-97856a44e9b7a5c16020c0fa6ae7ae9027f003c231c95c9131ecd01f9ce6d352.scope: Deactivated successfully.
Oct 12 17:05:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.320821776 +0000 UTC m=+0.049383862 container create 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:05:29 np0005481680 systemd[1]: Started libpod-conmon-392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f.scope.
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.298034759 +0000 UTC m=+0.026596815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4f8547d2a1f504eb9153787d23c872e6b9f946cd8c9d1b1d8357a69333a2c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4f8547d2a1f504eb9153787d23c872e6b9f946cd8c9d1b1d8357a69333a2c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4f8547d2a1f504eb9153787d23c872e6b9f946cd8c9d1b1d8357a69333a2c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4f8547d2a1f504eb9153787d23c872e6b9f946cd8c9d1b1d8357a69333a2c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.42044712 +0000 UTC m=+0.149009196 container init 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.432512566 +0000 UTC m=+0.161074612 container start 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.436342452 +0000 UTC m=+0.164904608 container attach 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]: {
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:    "0": [
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:        {
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "devices": [
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "/dev/loop3"
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            ],
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "lv_name": "ceph_lv0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "lv_size": "21470642176",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "name": "ceph_lv0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "tags": {
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.cluster_name": "ceph",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.crush_device_class": "",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.encrypted": "0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.osd_id": "0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.type": "block",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.vdo": "0",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:                "ceph.with_tpm": "0"
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            },
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "type": "block",
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:            "vg_name": "ceph_vg0"
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:        }
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]:    ]
Oct 12 17:05:29 np0005481680 bold_mcnulty[142443]: }
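
The JSON report above has the shape of `ceph-volume lvm list --format json` output (an inference; the invoking command itself is not shown in the log). A minimal Python sketch for pulling the OSD identity out of a saved copy of the report, under that assumption:

    import json

    # Hypothetical filename; assumes the JSON block above was saved verbatim.
    with open("ceph_volume_lvm_list.json") as fh:
        report = json.load(fh)

    # Top-level keys are OSD ids ("0" above); each maps to a list of LVs.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: fsid={tags['ceph.osd_fsid']} "
                  f"block={tags['ceph.block_device']} devices={lv['devices']}")
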
Oct 12 17:05:29 np0005481680 systemd[1]: libpod-392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f.scope: Deactivated successfully.
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.733758468 +0000 UTC m=+0.462320504 container died 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:05:29 np0005481680 python3.9[142500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3f4f8547d2a1f504eb9153787d23c872e6b9f946cd8c9d1b1d8357a69333a2c9-merged.mount: Deactivated successfully.
Oct 12 17:05:29 np0005481680 podman[142403]: 2025-10-12 21:05:29.790027503 +0000 UTC m=+0.518589559 container remove 392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mcnulty, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:05:29 np0005481680 systemd[1]: libpod-conmon-392bbf272303daf0f468e62dc14888a9e33dc92c4155109e6a8f71b1514f415f.scope: Deactivated successfully.
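
The create → init → start → attach → died → remove events above trace one short-lived foreground container. A hedged Python sketch of that pattern, reusing the image digest from the log; the container command is illustrative, not the recorded cephadm invocation:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # A foreground `podman run --rm` produces the same create/init/start/
    # attach/died/remove event sequence seen above and removes the container
    # when the command exits.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout)
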
Oct 12 17:05:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:05:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:30 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:05:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:30 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:05:30 np0005481680 python3.9[142695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303129.1862078-62-40737965119897/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=66d743d6767b50dcfc22a4999c89f03e91ed32ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
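
The stat/copy pair above deploys the openstack client keyring with mode=0600 and a sha1 checksum comparison. A rough Python equivalent of that write-then-verify pattern (the path is from the log; the keyring content is a placeholder, since the real material is not logged):

    import hashlib
    import os

    path = "/var/lib/openstack/config/ceph/ceph.client.openstack.keyring"
    content = b"[client.openstack]\n\tkey = <placeholder>\n"  # not logged

    # Create with owner-only permissions, matching the task's mode=0600.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as fh:
        fh.write(content)

    # The copy module decides whether to replace a file by comparing sha1
    # checksums (checksum_algorithm=sha1 in the stat call above).
    print(hashlib.sha1(content).hexdigest())
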
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.508369101 +0000 UTC m=+0.066201568 container create cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 17:05:30 np0005481680 systemd[1]: Started libpod-conmon-cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b.scope.
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.480168087 +0000 UTC m=+0.038000634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.599352986 +0000 UTC m=+0.157185523 container init cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.606983239 +0000 UTC m=+0.164815716 container start cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.610707234 +0000 UTC m=+0.168539741 container attach cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:05:30 np0005481680 optimistic_kare[142776]: 167 167
Oct 12 17:05:30 np0005481680 systemd[1]: libpod-cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b.scope: Deactivated successfully.
Oct 12 17:05:30 np0005481680 conmon[142776]: conmon cb011d12118189e18704 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b.scope/container/memory.events
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.614541031 +0000 UTC m=+0.172373528 container died cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:05:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-da5fe447a9ba3fb7dd19957a0bb998f43286bd3d2ea4168af3e196d6859e14bf-merged.mount: Deactivated successfully.
Oct 12 17:05:30 np0005481680 podman[142736]: 2025-10-12 21:05:30.662578328 +0000 UTC m=+0.220410835 container remove cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 17:05:30 np0005481680 systemd[1]: libpod-conmon-cb011d12118189e1870408a18b3fc9eaaafb539ce671e8bf61ef2ea95f7dc07b.scope: Deactivated successfully.
Oct 12 17:05:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:30 np0005481680 podman[142800]: 2025-10-12 21:05:30.892969475 +0000 UTC m=+0.079671710 container create 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:05:30 np0005481680 systemd[1]: Started libpod-conmon-4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb.scope.
Oct 12 17:05:30 np0005481680 podman[142800]: 2025-10-12 21:05:30.852849518 +0000 UTC m=+0.039551813 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:05:30 np0005481680 systemd[1]: session-50.scope: Deactivated successfully.
Oct 12 17:05:30 np0005481680 systemd[1]: session-50.scope: Consumed 3.468s CPU time.
Oct 12 17:05:30 np0005481680 systemd-logind[783]: Session 50 logged out. Waiting for processes to exit.
Oct 12 17:05:30 np0005481680 systemd-logind[783]: Removed session 50.
Oct 12 17:05:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:05:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d524fd4501abfba0d26e80a07961b7833645ebeafb476cd55db150fee2a60c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d524fd4501abfba0d26e80a07961b7833645ebeafb476cd55db150fee2a60c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d524fd4501abfba0d26e80a07961b7833645ebeafb476cd55db150fee2a60c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d524fd4501abfba0d26e80a07961b7833645ebeafb476cd55db150fee2a60c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:05:31 np0005481680 podman[142800]: 2025-10-12 21:05:31.002603352 +0000 UTC m=+0.189305627 container init 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 17:05:31 np0005481680 podman[142800]: 2025-10-12 21:05:31.01674046 +0000 UTC m=+0.203442695 container start 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:05:31 np0005481680 podman[142800]: 2025-10-12 21:05:31.02068661 +0000 UTC m=+0.207388905 container attach 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:05:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:31.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
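
The paired anonymous `HEAD /` requests from 192.168.122.100 and 192.168.122.102 recur on a two-second cadence throughout this section, which reads as load-balancer health probing of radosgw (an inference from the cadence; the probing service is not identified here). The same probe in Python, with an assumed frontend port:

    import http.client

    # Host is taken from the access log; the beast frontend port is an
    # assumption, as it does not appear in these lines.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # radosgw answers 200 with an empty body above
    conn.close()
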
Oct 12 17:05:31 np0005481680 lvm[142892]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:05:31 np0005481680 lvm[142892]: VG ceph_vg0 finished
Oct 12 17:05:31 np0005481680 competent_wiles[142816]: {}
Oct 12 17:05:31 np0005481680 systemd[1]: libpod-4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb.scope: Deactivated successfully.
Oct 12 17:05:31 np0005481680 podman[142800]: 2025-10-12 21:05:31.842802648 +0000 UTC m=+1.029504863 container died 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:05:31 np0005481680 systemd[1]: libpod-4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb.scope: Consumed 1.467s CPU time.
Oct 12 17:05:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d2d524fd4501abfba0d26e80a07961b7833645ebeafb476cd55db150fee2a60c-merged.mount: Deactivated successfully.
Oct 12 17:05:31 np0005481680 podman[142800]: 2025-10-12 21:05:31.898789416 +0000 UTC m=+1.085491621 container remove 4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wiles, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:05:31 np0005481680 systemd[1]: libpod-conmon-4eb8f6db7d3039eabb414e55f6ebd8a3623d04e541a0f510ef81d014b7c07edb.scope: Deactivated successfully.
Oct 12 17:05:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:05:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:05:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:32] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:05:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:32] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:05:32 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:32 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:05:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:05:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:05:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
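
The audited `osd blocklist ls` dispatch above is the mgr's periodic blocklist poll. The identical query can be issued by hand; a small Python wrapper, assuming a reachable cluster and a local admin keyring:

    import json
    import subprocess

    # Same prefix and format as the logged mon_command arguments.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])
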
Oct 12 17:05:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:05:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:35.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:35 np0005481680 systemd-logind[783]: New session 51 of user zuul.
Oct 12 17:05:35 np0005481680 systemd[1]: Started Session 51 of User zuul.
Oct 12 17:05:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:05:36 np0005481680 python3.9[143126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:36.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:05:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
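
Alertmanager is timing out while POSTing alerts to the ceph-dashboard receiver on compute-1/compute-2. For illustration only, a minimal listener for the `/api/prometheus_receiver` route taken from those URLs; everything else here is an assumption, including plain HTTP on 8443:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                length = int(self.headers.get("Content-Length", 0))
                print(self.rfile.read(length).decode())  # alert JSON payload
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    # Port 8443 matches the receiver URL alertmanager cannot reach above.
    HTTPServer(("", 8443), Receiver).serve_forever()
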
Oct 12 17:05:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:37.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:37 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01a0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:37 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:38 np0005481680 python3.9[143287]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:38 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:05:38 np0005481680 python3.9[143439]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:05:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:39 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:39 np0005481680 python3.9[143591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:05:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210539 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
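
haproxy's Layer4 check is a bare TCP connect/close, and the recurring ganesha `svc_vc_recv ... proxy header rest len failed` events are consistent with such probes arriving without a PROXY-protocol header (an inference; the log does not state the cause). The probe itself, sketched in Python with an assumed NFS port:

    import socket

    # Connect and immediately close, as a Layer4 health check does; port
    # 2049 is the conventional NFS port and an assumption here.
    with socket.create_connection(("127.0.0.1", 2049), timeout=2):
        pass  # a successful connect is enough to mark the backend UP
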
Oct 12 17:05:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:39 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:40 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:05:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:40 np0005481680 python3.9[143744]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 12 17:05:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:41 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:41 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01980025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:42] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:05:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:42] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:05:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:42 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:05:42 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 12 17:05:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:43 np0005481680 python3.9[143902]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:05:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:43 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:43 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:44 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:44 np0005481680 python3.9[143988]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:05:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:05:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:45.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:45 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:45 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:46 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:46 np0005481680 python3.9[144143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:05:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:05:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:46.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:05:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:47 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198002ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:47 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:48 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:05:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:05:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:05:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210548 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:05:49 np0005481680 python3[144300]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 12 17:05:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:49.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:49 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:49 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198003bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:50 np0005481680 python3.9[144454]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:50 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:05:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:51 np0005481680 python3.9[144606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:51.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:51.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:51 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210551 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:05:51 np0005481680 python3.9[144685]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:51 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:52] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:05:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:05:52] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:05:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:52 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198003bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:05:52 np0005481680 python3.9[144838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:05:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7035 writes, 29K keys, 7035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7035 writes, 1264 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7035 writes, 29K keys, 7035 commit groups, 1.0 writes per commit group, ingest: 20.30 MB, 0.03 MB/s#012Interval WAL: 7035 writes, 1264 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 12 17:05:53 np0005481680 python3.9[144916]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1fzocvt4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:53.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:05:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:53.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:05:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:53 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:53 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:54 np0005481680 python3.9[145070]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:54 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:05:54 np0005481680 python3.9[145148]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:55.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:55.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:55 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:05:55 np0005481680 python3.9[145326]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
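The task above snapshots the live ruleset in JSON form before the edpm firewall fragments are rewritten. A minimal sketch of reproducing that read-only step by hand (the pretty-printing pipe is an optional addition, not part of the logged task):

    $ nft -j list ruleset                          # same read-only call the task issues
    $ nft -j list ruleset | python3 -m json.tool   # optional: human-readable JSON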
Oct 12 17:05:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:55 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:56 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:05:56 np0005481680 python3[145480]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 12 17:05:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:05:56.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:05:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:57.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:57 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:05:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:05:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:05:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:57 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:57 np0005481680 python3.9[145633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:57 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:58 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:05:58 np0005481680 python3.9[145759]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303157.0875704-431-278974352128329/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:05:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:05:59.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:05:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:05:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:05:59.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:05:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:59 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:05:59 np0005481680 python3.9[145912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:05:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:05:59 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:00 np0005481680 python3.9[146038]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303158.7860615-476-102517232828551/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:00 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:00 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:06:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:00 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:06:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Oct 12 17:06:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:00 np0005481680 python3.9[146190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:01 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:01 np0005481680 python3.9[146316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303160.3896005-521-192742887388228/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:01 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:02] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:02] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:02 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 12 17:06:02 np0005481680 python3.9[146469]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:03 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:06:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:06:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:06:03 np0005481680 python3.9[146595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303162.0509381-566-268870884155876/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:03 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:03 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:04 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:06:04 np0005481680 python3.9[146748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:05 np0005481680 python3.9[146873]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303163.6441088-611-111155109096737/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:05.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:05 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0198004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:05 np0005481680 python3.9[147027]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:05 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:06 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:06 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:06:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:06:06 np0005481680 python3.9[147179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
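This is the syntax-check phase: the five edpm fragment files are concatenated in load order and fed to nft with -c, which parses the combined ruleset without committing anything. Restated as a standalone sketch using exactly the paths from the logged command:

    $ set -o pipefail
    $ cat /etc/nftables/edpm-chains.nft \
          /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c = check only, nothing is applied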
Oct 12 17:06:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:07.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:07.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:07 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0170000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:07 np0005481680 python3.9[147335]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
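The blockinfile task persists the ruleset across reboots by inserting include directives into /etc/sysconfig/nftables.conf, with the result validated via nft -c -f before the write. Decoding the #012 newline escapes in the logged block= and marker= parameters, the managed block should read roughly:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK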
Oct 12 17:06:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:07 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:08 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:06:08 np0005481680 python3.9[147488]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210608 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:06:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:09.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:09.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:09 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:06:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:09 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:06:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:09 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:09 np0005481680 python3.9[147642]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:06:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:09 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01700016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:10 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 12 17:06:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:11.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:11 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:11 np0005481680 python3.9[147798]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
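This is the apply phase gated by the edpm-rules.nft.changed sentinel (touched at 17:06:05, stat'ed at 17:06:09): only the flush, rule, and update-jump fragments are replayed, so the chain definitions loaded at 17:06:08 stay in place, and the sentinel is removed at 17:06:12 once the rules are live. As a sketch of the same two steps:

    $ cat /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f -   # commits, unlike the earlier -c check
    $ rm -f /etc/nftables/edpm-rules.nft.changed           # sentinel consumed after a successful apply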
Oct 12 17:06:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:11 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:12] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:06:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:12] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Oct 12 17:06:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:12 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:12 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:06:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 12 17:06:12 np0005481680 python3.9[147955]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:13.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:13 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:13 np0005481680 python3.9[148106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:06:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:13 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:14 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct 12 17:06:15 np0005481680 python3.9[148260]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:15 np0005481680 ovs-vsctl[148262]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
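These external_ids keys are how ovn-controller on this node learns its chassis configuration: encap IP 172.19.0.100 with geneve tunnels, the SSL southbound DB at ovsdbserver-sb.openstack.svc:6642, and the datacentre:br-ex bridge mapping. Each key can be read back the same way it was set; a minimal sketch for one of them:

    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    "172.19.0.100"        # expected, given the set command logged above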
Oct 12 17:06:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:15.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:15.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:15 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210615 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:06:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:15 np0005481680 python3.9[148440]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:15 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:16 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:06:16 np0005481680 python3.9[148597]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:16 np0005481680 ovs-vsctl[148598]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
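The two actions in this single ovs-vsctl transaction create a Manager record listening on ptcp:6640:127.0.0.1 and attach it to the switch's manager_options, which is what makes the earlier `ovs-vsctl show | grep -q "Manager"` probe pass on subsequent runs. Assuming the transaction succeeded, the target should now be listed by:

    $ ovs-vsctl get-manager
    ptcp:6640:127.0.0.1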
Oct 12 17:06:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:17.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:17.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:17 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:17 np0005481680 python3.9[148749]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:06:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:17 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:06:18
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.nfs']
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:06:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:18 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c001e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:06:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:06:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:06:18 np0005481680 python3.9[148904]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:19.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:19.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:19 np0005481680 python3.9[149057]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:19 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:19 np0005481680 python3.9[149136]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:19 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:20 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:06:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:20 np0005481680 python3.9[149288]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:21.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:21 np0005481680 python3.9[149367]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:21 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c001e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:21 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0194002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:22] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:22] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:22 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 255 B/s wr, 1 op/s
Oct 12 17:06:22 np0005481680 python3.9[149520]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:23.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:23 np0005481680 python3.9[149673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:23.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:23 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:23 np0005481680 python3.9[149752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:23 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:24 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c002b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 12 17:06:24 np0005481680 python3.9[149904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:25.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:25 np0005481680 python3.9[149983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:25 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:25 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:26 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:26 np0005481680 python3.9[150136]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:06:26 np0005481680 systemd[1]: Reloading.
Oct 12 17:06:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:26 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:06:26 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:06:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:27.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:06:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:27.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:27.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:27 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c002b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:27 np0005481680 python3.9[150326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:27 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:28 np0005481680 python3.9[150405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:28 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:29 np0005481680 python3.9[150557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:29 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:29 np0005481680 python3.9[150636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:29 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c002b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:30 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:30 np0005481680 python3.9[150789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:06:30 np0005481680 systemd[1]: Reloading.
Oct 12 17:06:30 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:06:30 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:06:31 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 17:06:31 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 17:06:31 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 17:06:31 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 17:06:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:31 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180003750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:31 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:32] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:32] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Oct 12 17:06:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:32 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:32 np0005481680 python3.9[150984]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:06:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:33.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:33 np0005481680 python3.9[151208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:06:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:33 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:06:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:33 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:06:33 np0005481680 python3.9[151344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303192.6186647-1364-57429047442699/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:33 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:06:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:34 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.666418246 +0000 UTC m=+0.070025715 container create bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Oct 12 17:06:34 np0005481680 systemd[1]: Started libpod-conmon-bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283.scope.
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.634522078 +0000 UTC m=+0.038129587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:34 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.791539616 +0000 UTC m=+0.195147155 container init bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.806183226 +0000 UTC m=+0.209790695 container start bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.811787608 +0000 UTC m=+0.215395067 container attach bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 17:06:34 np0005481680 distracted_wescoff[151584]: 167 167
Oct 12 17:06:34 np0005481680 systemd[1]: libpod-bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283.scope: Deactivated successfully.
Oct 12 17:06:34 np0005481680 conmon[151584]: conmon bd53fe80f58b40d26f1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283.scope/container/memory.events
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.81697099 +0000 UTC m=+0.220578479 container died bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 17:06:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0e6247bf95f19d3d107dfdef66e19d956cec3ef1c865e0859b1c79667076595e-merged.mount: Deactivated successfully.
Oct 12 17:06:34 np0005481680 podman[151534]: 2025-10-12 21:06:34.879753191 +0000 UTC m=+0.283360650 container remove bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 12 17:06:34 np0005481680 systemd[1]: libpod-conmon-bd53fe80f58b40d26f1d6ef3c198a87c334151d71ef50b7cc75aa2ae3e891283.scope: Deactivated successfully.
Oct 12 17:06:35 np0005481680 python3.9[151607]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.135727716 +0000 UTC m=+0.058555085 container create f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:06:35 np0005481680 systemd[1]: Started libpod-conmon-f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab.scope.
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.115036301 +0000 UTC m=+0.037863650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:35.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:35 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:35 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.258998939 +0000 UTC m=+0.181826368 container init f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.267101954 +0000 UTC m=+0.189929323 container start f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.271620488 +0000 UTC m=+0.194447907 container attach f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:06:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:35.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:35 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:35 np0005481680 dazzling_carver[151670]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:06:35 np0005481680 dazzling_carver[151670]: --> All data devices are unavailable
Oct 12 17:06:35 np0005481680 systemd[1]: libpod-f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab.scope: Deactivated successfully.
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.705348747 +0000 UTC m=+0.628176086 container died f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:06:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b1cba53278ee16df329d3ba8bcdf6df965db9cd85c90a21d118f0c4e2ccfc8c8-merged.mount: Deactivated successfully.
Oct 12 17:06:35 np0005481680 podman[151629]: 2025-10-12 21:06:35.753049705 +0000 UTC m=+0.675877034 container remove f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_carver, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:06:35 np0005481680 systemd[1]: libpod-conmon-f41b8ab4b3cea172511bf07770b85d97a9518d0a0c605eb9a8c76305a3686cab.scope: Deactivated successfully.
Oct 12 17:06:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:35 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:35 np0005481680 python3.9[151852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:06:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:36 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.494774625 +0000 UTC m=+0.073773929 container create b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.448716799 +0000 UTC m=+0.027716163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:36 np0005481680 systemd[1]: Started libpod-conmon-b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19.scope.
Oct 12 17:06:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.605743937 +0000 UTC m=+0.184743301 container init b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.616847738 +0000 UTC m=+0.195847052 container start b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.62090603 +0000 UTC m=+0.199905404 container attach b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:06:36 np0005481680 vibrant_mccarthy[152083]: 167 167
Oct 12 17:06:36 np0005481680 systemd[1]: libpod-b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19.scope: Deactivated successfully.
Oct 12 17:06:36 np0005481680 conmon[152083]: conmon b19fa038a939752cb0d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19.scope/container/memory.events
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.62402983 +0000 UTC m=+0.203029114 container died b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:06:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-de197ee74de5d8178964f68f27b9b06bfb65fa22403b81670f1c5a24f9313f3b-merged.mount: Deactivated successfully.
Oct 12 17:06:36 np0005481680 podman[152039]: 2025-10-12 21:06:36.660467253 +0000 UTC m=+0.239466537 container remove b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:06:36 np0005481680 systemd[1]: libpod-conmon-b19fa038a939752cb0d0f8d2375539a82d31d503eeebb1d847ede45cab0d4e19.scope: Deactivated successfully.
Oct 12 17:06:36 np0005481680 python3.9[152079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303195.367441-1439-72992797622859/.source.json _original_basename=.f6rmmwkp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:36 np0005481680 podman[152130]: 2025-10-12 21:06:36.877922862 +0000 UTC m=+0.046139510 container create cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:06:36 np0005481680 systemd[1]: Started libpod-conmon-cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f.scope.
Oct 12 17:06:36 np0005481680 podman[152130]: 2025-10-12 21:06:36.859583098 +0000 UTC m=+0.027799736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68b27f6c053177c524d8d75c4f36163aec8f3a9c8ae90481a61e15c72a294b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68b27f6c053177c524d8d75c4f36163aec8f3a9c8ae90481a61e15c72a294b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68b27f6c053177c524d8d75c4f36163aec8f3a9c8ae90481a61e15c72a294b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68b27f6c053177c524d8d75c4f36163aec8f3a9c8ae90481a61e15c72a294b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:36 np0005481680 podman[152130]: 2025-10-12 21:06:36.993760147 +0000 UTC m=+0.161976845 container init cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:06:37 np0005481680 podman[152130]: 2025-10-12 21:06:37.005397221 +0000 UTC m=+0.173613859 container start cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:06:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:37.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:37 np0005481680 podman[152130]: 2025-10-12 21:06:37.014647456 +0000 UTC m=+0.182864104 container attach cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 17:06:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:37.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:37 np0005481680 elated_jones[152147]: {
Oct 12 17:06:37 np0005481680 elated_jones[152147]:    "0": [
Oct 12 17:06:37 np0005481680 elated_jones[152147]:        {
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "devices": [
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "/dev/loop3"
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            ],
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "lv_name": "ceph_lv0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "lv_size": "21470642176",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "name": "ceph_lv0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "tags": {
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.cluster_name": "ceph",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.crush_device_class": "",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.encrypted": "0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.osd_id": "0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.type": "block",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.vdo": "0",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:                "ceph.with_tpm": "0"
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            },
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "type": "block",
Oct 12 17:06:37 np0005481680 elated_jones[152147]:            "vg_name": "ceph_vg0"
Oct 12 17:06:37 np0005481680 elated_jones[152147]:        }
Oct 12 17:06:37 np0005481680 elated_jones[152147]:    ]
Oct 12 17:06:37 np0005481680 elated_jones[152147]: }
Oct 12 17:06:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:37.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:37 np0005481680 systemd[1]: libpod-cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f.scope: Deactivated successfully.
Oct 12 17:06:37 np0005481680 podman[152130]: 2025-10-12 21:06:37.364555841 +0000 UTC m=+0.532772479 container died cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 17:06:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6b68b27f6c053177c524d8d75c4f36163aec8f3a9c8ae90481a61e15c72a294b-merged.mount: Deactivated successfully.
Oct 12 17:06:37 np0005481680 podman[152130]: 2025-10-12 21:06:37.42770031 +0000 UTC m=+0.595916968 container remove cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:06:37 np0005481680 systemd[1]: libpod-conmon-cb7bf3765ac9703e62518969eee91666ac6cef0f5a92c274edc3cc291485131f.scope: Deactivated successfully.
Oct 12 17:06:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:37 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:37 np0005481680 python3.9[152294]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:37 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:38 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.228366675 +0000 UTC m=+0.066855845 container create b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 17:06:38 np0005481680 systemd[1]: Started libpod-conmon-b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6.scope.
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.20885745 +0000 UTC m=+0.047346640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.32729157 +0000 UTC m=+0.165780760 container init b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.338522615 +0000 UTC m=+0.177011775 container start b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.342443864 +0000 UTC m=+0.180933064 container attach b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 17:06:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:38 np0005481680 romantic_bhaskara[152558]: 167 167
Oct 12 17:06:38 np0005481680 systemd[1]: libpod-b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6.scope: Deactivated successfully.
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.351590866 +0000 UTC m=+0.190080046 container died b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:06:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-82c8b34eb15d03ad14cc4651e9a905a76efc6a2c87e8bbaaca99dbefad5ed605-merged.mount: Deactivated successfully.
Oct 12 17:06:38 np0005481680 podman[152497]: 2025-10-12 21:06:38.398500244 +0000 UTC m=+0.236989434 container remove b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 17:06:38 np0005481680 systemd[1]: libpod-conmon-b04a5ef7c1564cdf7d207b6b7a02216d81645d9002f5ba1f93fc504f705117f6.scope: Deactivated successfully.
Oct 12 17:06:38 np0005481680 podman[152586]: 2025-10-12 21:06:38.613679226 +0000 UTC m=+0.068667231 container create 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 17:06:38 np0005481680 podman[152586]: 2025-10-12 21:06:38.584528067 +0000 UTC m=+0.039516112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:38 np0005481680 systemd[1]: Started libpod-conmon-63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b.scope.
Oct 12 17:06:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220bcac1a7b077aaa92a86494f573530e70ccdea41b06ffd576940d974df500b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220bcac1a7b077aaa92a86494f573530e70ccdea41b06ffd576940d974df500b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220bcac1a7b077aaa92a86494f573530e70ccdea41b06ffd576940d974df500b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220bcac1a7b077aaa92a86494f573530e70ccdea41b06ffd576940d974df500b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:38 np0005481680 podman[152586]: 2025-10-12 21:06:38.748713276 +0000 UTC m=+0.203701281 container init 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:06:38 np0005481680 podman[152586]: 2025-10-12 21:06:38.764027034 +0000 UTC m=+0.219015009 container start 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:06:38 np0005481680 podman[152586]: 2025-10-12 21:06:38.768042506 +0000 UTC m=+0.223030511 container attach 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:06:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:39.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:39 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:39 np0005481680 lvm[152824]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:06:39 np0005481680 lvm[152824]: VG ceph_vg0 finished
Oct 12 17:06:39 np0005481680 tender_jones[152634]: {}
Oct 12 17:06:39 np0005481680 systemd[1]: libpod-63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b.scope: Deactivated successfully.
Oct 12 17:06:39 np0005481680 podman[152586]: 2025-10-12 21:06:39.525696571 +0000 UTC m=+0.980684546 container died 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 17:06:39 np0005481680 systemd[1]: libpod-63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b.scope: Consumed 1.281s CPU time.
Oct 12 17:06:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-220bcac1a7b077aaa92a86494f573530e70ccdea41b06ffd576940d974df500b-merged.mount: Deactivated successfully.
Oct 12 17:06:39 np0005481680 podman[152586]: 2025-10-12 21:06:39.571912861 +0000 UTC m=+1.026900826 container remove 63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:06:39 np0005481680 systemd[1]: libpod-conmon-63dafc1886f9aae365ae897a68cf75646f052384d361b7343b7ce14d09e05f8b.scope: Deactivated successfully.
Oct 12 17:06:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:06:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:06:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:39 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:40 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:40 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:06:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:40 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f017c004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:40 np0005481680 python3.9[152993]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 12 17:06:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:41.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:41 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:41 np0005481680 python3.9[153147]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:06:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:41 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0180004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:06:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:06:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:06:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[141388]: 12/10/2025 21:06:42 : epoch 68ec1814 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0174003c30 fd 39 proxy ignored for local
Oct 12 17:06:42 np0005481680 kernel: ganesha.nfsd[143592]: segfault at 50 ip 00007f024c6fe32e sp 00007f0208ff8210 error 4 in libntirpc.so.5.8[7f024c6e3000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 12 17:06:42 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:06:42 np0005481680 systemd[1]: Started Process Core Dump (PID 153209/UID 0).
Oct 12 17:06:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:06:42 np0005481680 python3.9[153301]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 12 17:06:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:43 np0005481680 systemd-coredump[153221]: Process 141392 (ganesha.nfsd) of user 0 dumped core.
Oct 12 17:06:43 np0005481680 systemd-coredump[153221]: Stack trace of thread 54:
Oct 12 17:06:43 np0005481680 systemd-coredump[153221]: #0  0x00007f024c6fe32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Oct 12 17:06:43 np0005481680 systemd-coredump[153221]: ELF object binary architecture: AMD x86-64
Oct 12 17:06:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210643 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:06:43 np0005481680 systemd[1]: systemd-coredump@4-153209-0.service: Deactivated successfully.
Oct 12 17:06:43 np0005481680 systemd[1]: systemd-coredump@4-153209-0.service: Consumed 1.204s CPU time.
Oct 12 17:06:43 np0005481680 podman[153355]: 2025-10-12 21:06:43.624424328 +0000 UTC m=+0.039468412 container died c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:06:43 np0005481680 systemd[1]: var-lib-containers-storage-overlay-36eba0a2a788e6a2204111e064c04caa6d23d733214f8e999d2e7a3eecbb297f-merged.mount: Deactivated successfully.
Oct 12 17:06:43 np0005481680 podman[153355]: 2025-10-12 21:06:43.658319276 +0000 UTC m=+0.073363350 container remove c96e3435b01587a666bac26dd12509e7712f00718f5877d05bd0ecdf6993457e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 12 17:06:43 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:06:43 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:06:43 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.879s CPU time.
Oct 12 17:06:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:06:45 np0005481680 python3[153528]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:06:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:45.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 12 17:06:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:47.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:47.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210647 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:06:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:06:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:06:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:06:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:49.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 12 17:06:50 np0005481680 podman[153542]: 2025-10-12 21:06:50.489402414 +0000 UTC m=+5.423478079 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 12 17:06:50 np0005481680 podman[153667]: 2025-10-12 21:06:50.633124774 +0000 UTC m=+0.048727695 container create f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 12 17:06:50 np0005481680 podman[153667]: 2025-10-12 21:06:50.606291625 +0000 UTC m=+0.021894586 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 12 17:06:50 np0005481680 python3[153528]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 12 17:06:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:51.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:51 np0005481680 python3.9[153859]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:06:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:52] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 12 17:06:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:06:52] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 12 17:06:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 12 17:06:53 np0005481680 python3.9[154013]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:53 np0005481680 python3.9[154090]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:06:54 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 5.
Oct 12 17:06:54 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:06:54 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.879s CPU time.
Oct 12 17:06:54 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:06:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:06:54 np0005481680 podman[154287]: 2025-10-12 21:06:54.400527797 +0000 UTC m=+0.083622800 container create 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:06:54 np0005481680 podman[154287]: 2025-10-12 21:06:54.367613643 +0000 UTC m=+0.050708646 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:06:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c4cc1977cf9ade1f902a89110783d0d662bd7fd3e1494e849cbf545dcc3c87/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c4cc1977cf9ade1f902a89110783d0d662bd7fd3e1494e849cbf545dcc3c87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c4cc1977cf9ade1f902a89110783d0d662bd7fd3e1494e849cbf545dcc3c87/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c4cc1977cf9ade1f902a89110783d0d662bd7fd3e1494e849cbf545dcc3c87/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:54 np0005481680 podman[154287]: 2025-10-12 21:06:54.489727227 +0000 UTC m=+0.172822290 container init 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:06:54 np0005481680 podman[154287]: 2025-10-12 21:06:54.499768201 +0000 UTC m=+0.182863204 container start 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:06:54 np0005481680 bash[154287]: 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:06:54 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:06:54 np0005481680 python3.9[154299]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760303213.6877677-1703-37616010721657/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:06:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:06:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:06:55 np0005481680 python3.9[154422]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:06:55 np0005481680 systemd[1]: Reloading.
Oct 12 17:06:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:55 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:06:55 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:06:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:06:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:55.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:06:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:06:56 np0005481680 python3.9[154562]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:06:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:06:56 np0005481680 systemd[1]: Reloading.
Oct 12 17:06:56 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:06:56 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:06:56 np0005481680 systemd[1]: Starting ovn_controller container...
Oct 12 17:06:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:06:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bb16492c0a1d2909efd4559406f14e590925df742bd2b343d83df2904e9ad/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 12 17:06:56 np0005481680 systemd[1]: Started /usr/bin/podman healthcheck run f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11.
Oct 12 17:06:56 np0005481680 podman[154602]: 2025-10-12 21:06:56.901448165 +0000 UTC m=+0.190527238 container init f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:06:56 np0005481680 ovn_controller[154617]: + sudo -E kolla_set_configs
Oct 12 17:06:56 np0005481680 podman[154602]: 2025-10-12 21:06:56.941259744 +0000 UTC m=+0.230338807 container start f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 12 17:06:56 np0005481680 edpm-start-podman-container[154602]: ovn_controller
Oct 12 17:06:56 np0005481680 systemd[1]: Created slice User Slice of UID 0.
Oct 12 17:06:56 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 12 17:06:57 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 12 17:06:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:57.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:06:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:06:57.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:06:57 np0005481680 systemd[1]: Starting User Manager for UID 0...
Oct 12 17:06:57 np0005481680 edpm-start-podman-container[154601]: Creating additional drop-in dependency for "ovn_controller" (f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11)
Oct 12 17:06:57 np0005481680 podman[154624]: 2025-10-12 21:06:57.068926328 +0000 UTC m=+0.111970437 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 12 17:06:57 np0005481680 systemd[1]: f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11-7eb80983edaa2154.service: Main process exited, code=exited, status=1/FAILURE
Oct 12 17:06:57 np0005481680 systemd[1]: f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11-7eb80983edaa2154.service: Failed with result 'exit-code'.
Oct 12 17:06:57 np0005481680 systemd[1]: Reloading.
Oct 12 17:06:57 np0005481680 systemd[154649]: Queued start job for default target Main User Target.
Oct 12 17:06:57 np0005481680 systemd[154649]: Created slice User Application Slice.
Oct 12 17:06:57 np0005481680 systemd[154649]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 12 17:06:57 np0005481680 systemd[154649]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 17:06:57 np0005481680 systemd[154649]: Reached target Paths.
Oct 12 17:06:57 np0005481680 systemd[154649]: Reached target Timers.
Oct 12 17:06:57 np0005481680 systemd[154649]: Starting D-Bus User Message Bus Socket...
Oct 12 17:06:57 np0005481680 systemd[154649]: Starting Create User's Volatile Files and Directories...
Oct 12 17:06:57 np0005481680 systemd[154649]: Finished Create User's Volatile Files and Directories.
Oct 12 17:06:57 np0005481680 systemd[154649]: Listening on D-Bus User Message Bus Socket.
Oct 12 17:06:57 np0005481680 systemd[154649]: Reached target Sockets.
Oct 12 17:06:57 np0005481680 systemd[154649]: Reached target Basic System.
Oct 12 17:06:57 np0005481680 systemd[154649]: Reached target Main User Target.
Oct 12 17:06:57 np0005481680 systemd[154649]: Startup finished in 166ms.
Oct 12 17:06:57 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:06:57 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:06:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:57.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:57.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:57 np0005481680 systemd[1]: Started User Manager for UID 0.
Oct 12 17:06:57 np0005481680 systemd[1]: Started ovn_controller container.
Oct 12 17:06:57 np0005481680 systemd[1]: Started Session c1 of User root.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: INFO:__main__:Validating config file
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: INFO:__main__:Writing out command to execute
Oct 12 17:06:57 np0005481680 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: ++ cat /run_command
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + ARGS=
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + sudo kolla_copy_cacerts
Oct 12 17:06:57 np0005481680 systemd[1]: Started Session c2 of User root.
Oct 12 17:06:57 np0005481680 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + [[ ! -n '' ]]
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + . kolla_extend_start
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + umask 0022
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6677] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6689] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6708] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6718] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6725] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 12 17:06:57 np0005481680 kernel: br-int: entered promiscuous mode
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00017|main|INFO|OVS feature set changed, force recompute.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.6988] manager: (ovn-78fa6b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 12 17:06:57 np0005481680 ovn_controller[154617]: 2025-10-12T21:06:57Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.7001] manager: (ovn-cafd13-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 12 17:06:57 np0005481680 kernel: genev_sys_6081: entered promiscuous mode
Oct 12 17:06:57 np0005481680 systemd-udevd[154751]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:06:57 np0005481680 systemd-udevd[154755]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.7326] device (genev_sys_6081): carrier: link connected
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.7331] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct 12 17:06:57 np0005481680 NetworkManager[44859]: <info>  [1760303217.8532] manager: (ovn-e9dadf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 12 17:06:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Oct 12 17:06:58 np0005481680 python3.9[154885]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:58 np0005481680 ovs-vsctl[154886]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 12 17:06:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:06:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:06:59.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:06:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:06:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:06:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:06:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:06:59 np0005481680 python3.9[155039]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:06:59 np0005481680 ovs-vsctl[155041]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 12 17:07:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:07:00 np0005481680 python3.9[155195]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:07:00 np0005481680 ovs-vsctl[155196]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 12 17:07:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 12 17:07:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 12 17:07:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:07:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:07:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:07:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:01 np0005481680 systemd[1]: session-51.scope: Deactivated successfully.
Oct 12 17:07:01 np0005481680 systemd[1]: session-51.scope: Consumed 1min 7.604s CPU time.
Oct 12 17:07:01 np0005481680 systemd-logind[783]: Session 51 logged out. Waiting for processes to exit.
Oct 12 17:07:01 np0005481680 systemd-logind[783]: Removed session 51.
Oct 12 17:07:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:01.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:02] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 12 17:07:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:02] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Oct 12 17:07:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:07:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:07:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:07:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:07:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Oct 12 17:07:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:05.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:05.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210705 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:07:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct 12 17:07:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:07.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:07:07 np0005481680 systemd-logind[783]: New session 53 of user zuul.
Oct 12 17:07:07 np0005481680 systemd[1]: Started Session 53 of User zuul.
Oct 12 17:07:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:07.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:07 np0005481680 systemd[1]: Stopping User Manager for UID 0...
Oct 12 17:07:07 np0005481680 systemd[154649]: Activating special unit Exit the Session...
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped target Main User Target.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped target Basic System.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped target Paths.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped target Sockets.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped target Timers.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 12 17:07:07 np0005481680 systemd[154649]: Closed D-Bus User Message Bus Socket.
Oct 12 17:07:07 np0005481680 systemd[154649]: Stopped Create User's Volatile Files and Directories.
Oct 12 17:07:07 np0005481680 systemd[154649]: Removed slice User Application Slice.
Oct 12 17:07:07 np0005481680 systemd[154649]: Reached target Shutdown.
Oct 12 17:07:07 np0005481680 systemd[154649]: Finished Exit the Session.
Oct 12 17:07:07 np0005481680 systemd[154649]: Reached target Exit the Session.
Oct 12 17:07:07 np0005481680 systemd[1]: user@0.service: Deactivated successfully.
Oct 12 17:07:07 np0005481680 systemd[1]: Stopped User Manager for UID 0.
Oct 12 17:07:07 np0005481680 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 12 17:07:07 np0005481680 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 12 17:07:07 np0005481680 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 12 17:07:07 np0005481680 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 12 17:07:07 np0005481680 systemd[1]: Removed slice User Slice of UID 0.
Oct 12 17:07:08 np0005481680 python3.9[155383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:07:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct 12 17:07:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000f:nfs.cephfs.2: -2
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:07:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7320000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:09 np0005481680 python3.9[155540]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:10 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 12 17:07:10 np0005481680 python3.9[155708]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:11 np0005481680 python3.9[155861]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:07:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:11.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:07:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:11 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210711 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:07:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:11 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:07:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:12] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Oct 12 17:07:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:12 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:12 np0005481680 python3.9[156014]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 12 17:07:13 np0005481680 python3.9[156166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:13 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:13 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:14 np0005481680 python3.9[156318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:07:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:14 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Oct 12 17:07:15 np0005481680 python3.9[156470]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 12 17:07:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:15.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:15 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:15 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc002400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:16 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 12 17:07:16 np0005481680 python3.9[156648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:17.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:07:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:17.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:17 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:17 np0005481680 python3.9[156770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303236.0910985-218-81838472244858/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:17 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:07:18
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', '.nfs', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'vms', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
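This balancer pass records its configuration inline: mode upmap, max misplaced 0.05, and "prepared 0/10 upmap changes", i.e. it was willing to prepare up to 10 upmap items but found nothing to move (all 337 PGs are already active+clean). A minimal sketch of the gating arithmetic these messages imply (names are illustrative, not mgr internals):

    def should_balance(misplaced_pgs: int, total_pgs: int,
                       max_misplaced: float = 0.05) -> bool:
        """Skip a balancer pass while too much data is already in flight."""
        return misplaced_pgs / total_pgs <= max_misplaced

    print(should_balance(0, 337))   # True: nothing misplaced, safe to plan
    print(should_balance(30, 337))  # False: ~8.9% misplaced exceeds 0.05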
Oct 12 17:07:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:18 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc002400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:07:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
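Each pg_autoscaler pair above is a small computation: pg target = (fraction of space used) x bias x an overall PG budget, and every line in this pass is consistent with a budget of 300 (presumably mon_target_pg_per_osd = 100 across the cluster's 3 OSDs; the OSD count is an assumption, since it is not printed here). The raw target is then quantized, never below 1, against the pool's current pg_num. A quick check in Python reproducing three of the logged targets:

    # The multiplier 300 is assumed (100 target PGs per OSD x 3 OSDs);
    # it matches every pg_autoscaler line in this pass.
    TARGET_PGS = 100 * 3

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (used_ratio, bias) in pools.items():
        raw = used_ratio * bias * TARGET_PGS
        print(f"{name}: pg target {raw}")
    # Prints 0.0021557249951162337, 0.0006104707950771635 and
    # 0.00015261769876929088 respectively, matching the logged values.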
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:07:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:07:18 np0005481680 python3.9[156921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:19.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:19 np0005481680 python3.9[157043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303237.8391235-263-98718799643503/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:19.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:19 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:19 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:20 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 12 17:07:20 np0005481680 python3.9[157196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:07:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:21.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:21.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:21 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc002400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:21 np0005481680 python3.9[157281]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:07:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:21 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:22] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:22] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:22 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:07:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210722 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:07:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:23.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:23 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:23 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:24 np0005481680 python3.9[157437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:07:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:24 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:07:25 np0005481680 python3.9[157590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:25.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:25.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:25 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:25 np0005481680 python3.9[157713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303244.5724428-374-111762373318019/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:25 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:26 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:07:26 np0005481680 python3.9[157863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:27.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:07:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:27 np0005481680 python3.9[157985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303246.0089424-374-222462440815/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:27 np0005481680 ovn_controller[154617]: 2025-10-12T21:07:27Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Oct 12 17:07:27 np0005481680 ovn_controller[154617]: 2025-10-12T21:07:27Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 12 17:07:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:27 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:27 np0005481680 podman[157986]: 2025-10-12 21:07:27.504590021 +0000 UTC m=+0.156838594 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:07:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:27 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:28 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:07:28 np0005481680 python3.9[158162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:29.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:29.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:29 np0005481680 python3.9[158284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303248.411982-506-250724728520520/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:30 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:07:30 np0005481680 python3.9[158435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:07:31 np0005481680 python3.9[158556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303249.9465353-506-176005766261478/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:31.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:31.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:32] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:32] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:32 np0005481680 python3.9[158708]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:07:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:07:33 np0005481680 python3.9[158862]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:33.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:07:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:07:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:33.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:33 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:33 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:34 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:07:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:34 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:07:34 np0005481680 python3.9[159016]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:34 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:07:34 np0005481680 python3.9[159094]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:35.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:07:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:07:35 np0005481680 python3.9[159247]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:35 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:35 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:36 np0005481680 python3.9[159326]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:36 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:07:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:37.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:07:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:37.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:07:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:37 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:07:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:37.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:37 np0005481680 python3.9[159503]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:37 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:37 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:38 np0005481680 python3.9[159657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:38 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:07:38 np0005481680 python3.9[159735]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.005000124s ======
Oct 12 17:07:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000124s
Oct 12 17:07:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:39 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:39 np0005481680 python3.9[159889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:39 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:40 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:07:40 np0005481680 python3.9[160015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:07:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:41.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:41 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:41 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:07:41 np0005481680 python3.9[160253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
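This ansible.builtin.systemd call (daemon_reload=True, enabled=True, state=started for edpm-container-shutdown) is what triggers the "Reloading." and generator messages that follow. Expressed as plain systemctl calls, its effect is roughly the sequence below; a sketch of the equivalent actions, not the module's actual implementation:

    import subprocess

    # Rough systemctl equivalent of the ansible.builtin.systemd invocation above.
    for cmd in (["systemctl", "daemon-reload"],                      # daemon_reload=True
                ["systemctl", "enable", "edpm-container-shutdown"],  # enabled=True
                ["systemctl", "start", "edpm-container-shutdown"]):  # state=started
        subprocess.run(cmd, check=True)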
Oct 12 17:07:41 np0005481680 systemd[1]: Reloading.
Oct 12 17:07:41 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:07:41 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:07:41 np0005481680 podman[160297]: 2025-10-12 21:07:41.823009544 +0000 UTC m=+0.083935035 container create 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:07:41 np0005481680 podman[160297]: 2025-10-12 21:07:41.793813362 +0000 UTC m=+0.054738863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:41 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Oct 12 17:07:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:42] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
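The daemon wrapper and the mgr's cherrypy access log both record the same Prometheus scrape of /metrics (200, 48257 bytes). A sketch of pulling the same endpoint by hand; port 9283 is the mgr prometheus module's default and an assumption here, since the log omits it:

    import urllib.request

    # Scrape the mgr prometheus endpoint (port 9283 assumed).
    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode()
    # Exposition format: '#' lines are comments, the rest are samples.
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(f"{len(samples)} samples scraped")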
Oct 12 17:07:42 np0005481680 systemd[1]: Started libpod-conmon-781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a.scope.
Oct 12 17:07:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:42 np0005481680 podman[160297]: 2025-10-12 21:07:42.133305033 +0000 UTC m=+0.394230524 container init 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:07:42 np0005481680 podman[160297]: 2025-10-12 21:07:42.147900395 +0000 UTC m=+0.408825876 container start 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:07:42 np0005481680 podman[160297]: 2025-10-12 21:07:42.153361664 +0000 UTC m=+0.414287155 container attach 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:07:42 np0005481680 charming_gates[160347]: 167 167
Oct 12 17:07:42 np0005481680 systemd[1]: libpod-781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a.scope: Deactivated successfully.
Oct 12 17:07:42 np0005481680 podman[160297]: 2025-10-12 21:07:42.157420108 +0000 UTC m=+0.418345589 container died 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:07:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e448d5f580fcf88a87f90936aca76297f1e7e28cb788925d90613a2a50c679fc-merged.mount: Deactivated successfully.
Oct 12 17:07:42 np0005481680 podman[160297]: 2025-10-12 21:07:42.216087828 +0000 UTC m=+0.477013319 container remove 781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_gates, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:07:42 np0005481680 systemd[1]: libpod-conmon-781af58e4799bb35b6b624145f641b5c8e99f9de4612963c199525d65ddbd64a.scope: Deactivated successfully.
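The podman sequence above (image pull by digest, create, init, start, attach, died, remove, all within half a second) is cephadm's pattern of running a disposable container per probe; the "167 167" printed by charming_gates is most likely the ceph uid/gid probe inside the image. A sketch of the same one-shot pattern, using the image digest from the log; --rm reproduces the create/start/remove lifecycle journald records (needs podman privileges on the host):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot container: podman creates, starts, attaches, then removes it,
    # matching the create/init/start/attach/died/remove sequence above.
    # The stat command is an assumed stand-in for cephadm's uid/gid probe.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())   # e.g. "167 167"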
Oct 12 17:07:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:42 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 937 B/s wr, 2 op/s
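These pgmap digests (337 PGs all active+clean, 149 MiB used of 60 GiB, current client IO) repeat every couple of seconds. The same summary can be pulled on demand; a sketch via the ceph CLI's JSON output, assuming an admin keyring on the host and the usual ceph status field names:

    import json
    import subprocess

    # 'ceph status --format json' carries the same pgmap digest the mgr logs.
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    pgmap = json.loads(out)["pgmap"]
    print(pgmap["num_pgs"], pgmap.get("pgs_by_state"))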
Oct 12 17:07:42 np0005481680 podman[160410]: 2025-10-12 21:07:42.47804708 +0000 UTC m=+0.084092320 container create 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:07:42 np0005481680 podman[160410]: 2025-10-12 21:07:42.443384069 +0000 UTC m=+0.049429359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:42 np0005481680 systemd[1]: Started libpod-conmon-7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28.scope.
Oct 12 17:07:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:42 np0005481680 podman[160410]: 2025-10-12 21:07:42.603212972 +0000 UTC m=+0.209258232 container init 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:07:42 np0005481680 podman[160410]: 2025-10-12 21:07:42.624700408 +0000 UTC m=+0.230745648 container start 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:07:42 np0005481680 podman[160410]: 2025-10-12 21:07:42.628958027 +0000 UTC m=+0.235003267 container attach 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:07:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210742 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
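Here the haproxy instance fronting the NFS service reports backend nfs.cephfs.0 back UP after a Layer4 check, i.e. a bare TCP connect; the recurring ganesha "proxy header rest len failed" events above are consistent with such probes opening connections without a valid PROXY protocol header. A minimal Layer4-style check in Python (host and port are placeholders; 2049 is the conventional NFS port):

    import socket

    def layer4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        """TCP connect check, like haproxy 'check' with no send/expect rules."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(layer4_check("192.168.122.100", 2049))  # assumed backend address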
Oct 12 17:07:43 np0005481680 python3.9[160545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:43 np0005481680 naughty_tesla[160464]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:07:43 np0005481680 naughty_tesla[160464]: --> All data devices are unavailable
Oct 12 17:07:43 np0005481680 systemd[1]: libpod-7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28.scope: Deactivated successfully.
Oct 12 17:07:43 np0005481680 podman[160410]: 2025-10-12 21:07:43.136996994 +0000 UTC m=+0.743042214 container died 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:07:43 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5a8ccb02ba82d1e7ffc2a6e0bbcd47057da1b91b2ba4a0161e6e7933015def74-merged.mount: Deactivated successfully.
Oct 12 17:07:43 np0005481680 podman[160410]: 2025-10-12 21:07:43.209238732 +0000 UTC m=+0.815283942 container remove 7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_tesla, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:07:43 np0005481680 systemd[1]: libpod-conmon-7d5f729c0db5f0043e9d2397b26778059443a4aed555b878bea74d997a649e28.scope: Deactivated successfully.
Oct 12 17:07:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.003000074s ======
Oct 12 17:07:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Oct 12 17:07:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:43.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:43 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:43 np0005481680 python3.9[160693]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:43 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.035550284 +0000 UTC m=+0.071042288 container create e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:07:44 np0005481680 systemd[1]: Started libpod-conmon-e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112.scope.
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:43.998899862 +0000 UTC m=+0.034391906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.153785319 +0000 UTC m=+0.189277373 container init e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.164708817 +0000 UTC m=+0.200200821 container start e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.17187534 +0000 UTC m=+0.207367394 container attach e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:07:44 np0005481680 great_boyd[160831]: 167 167
Oct 12 17:07:44 np0005481680 systemd[1]: libpod-e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112.scope: Deactivated successfully.
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.176880087 +0000 UTC m=+0.212372091 container died e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:07:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-33bd8c81eea046addd4f5dcade3359ebf036493205c84f739b220f62fedfb674-merged.mount: Deactivated successfully.
Oct 12 17:07:44 np0005481680 podman[160775]: 2025-10-12 21:07:44.234717487 +0000 UTC m=+0.270209461 container remove e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:07:44 np0005481680 systemd[1]: libpod-conmon-e904b98e74d403b1df36e1729ddb3e633c886d53aa55dfb44888c8ad2f863112.scope: Deactivated successfully.
Oct 12 17:07:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:44 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 937 B/s wr, 3 op/s
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.452611448 +0000 UTC m=+0.079799930 container create 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.418442489 +0000 UTC m=+0.045631021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:44 np0005481680 systemd[1]: Started libpod-conmon-3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e.scope.
Oct 12 17:07:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49acd240810f642a5f659070cfa902bb6b325d2b0ebd28eaf826eb5d6b6eb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49acd240810f642a5f659070cfa902bb6b325d2b0ebd28eaf826eb5d6b6eb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49acd240810f642a5f659070cfa902bb6b325d2b0ebd28eaf826eb5d6b6eb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49acd240810f642a5f659070cfa902bb6b325d2b0ebd28eaf826eb5d6b6eb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.594814773 +0000 UTC m=+0.222003315 container init 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.607773164 +0000 UTC m=+0.234961656 container start 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.612304249 +0000 UTC m=+0.239492731 container attach 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:07:44 np0005481680 python3.9[160940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:44 np0005481680 loving_ride[160946]: {
Oct 12 17:07:44 np0005481680 loving_ride[160946]:    "0": [
Oct 12 17:07:44 np0005481680 loving_ride[160946]:        {
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "devices": [
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "/dev/loop3"
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            ],
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "lv_name": "ceph_lv0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "lv_size": "21470642176",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "name": "ceph_lv0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "tags": {
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.cluster_name": "ceph",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.crush_device_class": "",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.encrypted": "0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.osd_id": "0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.type": "block",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.vdo": "0",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:                "ceph.with_tpm": "0"
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            },
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "type": "block",
Oct 12 17:07:44 np0005481680 loving_ride[160946]:            "vg_name": "ceph_vg0"
Oct 12 17:07:44 np0005481680 loving_ride[160946]:        }
Oct 12 17:07:44 np0005481680 loving_ride[160946]:    ]
Oct 12 17:07:44 np0005481680 loving_ride[160946]: }
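The loving_ride output above is a ceph-volume lvm list report in JSON: one OSD (id 0, fsid 47abdfbc-9d1c-416d-8d2d-2f925f341a02) on LV ceph_vg0/ceph_lv0 backed by /dev/loop3, with all of the OSD metadata carried as LVM tags. A sketch of consuming the same report directly on the host; ceph-volume supports --format json, and the key names follow the output above:

    import json
    import subprocess

    # Same report the container produced, parsed into osd_id -> device info.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev["tags"]
            print(osd_id, dev["lv_path"], tags["ceph.osd_fsid"], dev["devices"])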
Oct 12 17:07:44 np0005481680 systemd[1]: libpod-3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e.scope: Deactivated successfully.
Oct 12 17:07:44 np0005481680 conmon[160946]: conmon 3bee85b98ef96c1d277e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e.scope/container/memory.events
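conmon's warning is a race, not a fault: it tries to read the scope's cgroup v2 memory.events after the short-lived container has exited and the cgroup has been removed. For a still-running scope the file is readable; a sketch, with the scope name taken from the log and the path layout an assumption about this host's cgroup hierarchy:

    from pathlib import Path

    # cgroup v2 memory.events for a libpod scope (exists only while it runs).
    scope = ("libpod-3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b"
             "7930dfab3fd4e.scope")
    path = Path("/sys/fs/cgroup/machine.slice") / scope / "container" / "memory.events"
    if path.exists():
        print(path.read_text())   # e.g. "oom 0\noom_kill 0\n..."
    else:
        print("cgroup already removed, which is what conmon hit above")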
Oct 12 17:07:44 np0005481680 podman[160917]: 2025-10-12 21:07:44.980360577 +0000 UTC m=+0.607549069 container died 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 17:07:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a49acd240810f642a5f659070cfa902bb6b325d2b0ebd28eaf826eb5d6b6eb29-merged.mount: Deactivated successfully.
Oct 12 17:07:45 np0005481680 podman[160917]: 2025-10-12 21:07:45.039278575 +0000 UTC m=+0.666467057 container remove 3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ride, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:07:45 np0005481680 systemd[1]: libpod-conmon-3bee85b98ef96c1d277e8e7638ed68bf6ef390ce8f37d01466b7930dfab3fd4e.scope: Deactivated successfully.
Oct 12 17:07:45 np0005481680 python3.9[161045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:45.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:45 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.832022451 +0000 UTC m=+0.064318427 container create 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:07:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:45 np0005481680 systemd[1]: Started libpod-conmon-78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f.scope.
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.810525405 +0000 UTC m=+0.042821361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:45 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.956516997 +0000 UTC m=+0.188813023 container init 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:07:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:45 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.966191583 +0000 UTC m=+0.198487529 container start 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.969612269 +0000 UTC m=+0.201908245 container attach 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 17:07:45 np0005481680 laughing_taussig[161277]: 167 167
Oct 12 17:07:45 np0005481680 systemd[1]: libpod-78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f.scope: Deactivated successfully.
Oct 12 17:07:45 np0005481680 podman[161235]: 2025-10-12 21:07:45.975276733 +0000 UTC m=+0.207572709 container died 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:07:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8450c1b400135c3402de8000f6a0a9f868e93329b536548dcdbd7960b7128d4b-merged.mount: Deactivated successfully.
Oct 12 17:07:46 np0005481680 podman[161235]: 2025-10-12 21:07:46.033593936 +0000 UTC m=+0.265889882 container remove 78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:07:46 np0005481680 systemd[1]: libpod-conmon-78e199756e50f820bc6c7d729349f7c1d74a9997ef5337c4b64e81a00b603a6f.scope: Deactivated successfully.
Oct 12 17:07:46 np0005481680 python3.9[161312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:07:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:46 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:46 np0005481680 podman[161332]: 2025-10-12 21:07:46.302462343 +0000 UTC m=+0.088250515 container create 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:07:46 np0005481680 systemd[1]: Reloading.
Oct 12 17:07:46 np0005481680 podman[161332]: 2025-10-12 21:07:46.267658017 +0000 UTC m=+0.053446229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:07:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:07:46 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:07:46 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:07:46 np0005481680 systemd[1]: Started libpod-conmon-19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310.scope.
Oct 12 17:07:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:07:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfba7cdbc6b814290adb3b817fb2b6b15c892345810ad9b3f42e174b175bb95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfba7cdbc6b814290adb3b817fb2b6b15c892345810ad9b3f42e174b175bb95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfba7cdbc6b814290adb3b817fb2b6b15c892345810ad9b3f42e174b175bb95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfba7cdbc6b814290adb3b817fb2b6b15c892345810ad9b3f42e174b175bb95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:07:46 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 17:07:46 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 17:07:46 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 17:07:46 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 17:07:46 np0005481680 podman[161332]: 2025-10-12 21:07:46.737555646 +0000 UTC m=+0.523343838 container init 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:07:46 np0005481680 podman[161332]: 2025-10-12 21:07:46.751577452 +0000 UTC m=+0.537365614 container start 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:07:46 np0005481680 podman[161332]: 2025-10-12 21:07:46.757342528 +0000 UTC m=+0.543130720 container attach 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:07:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:47.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
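Alertmanager gives up delivering the ceph-dashboard webhook to compute-1 and compute-2 on port 8443 (dial timeout and context deadline exceeded), so only this node's dashboard receiver is reachable. A quick reachability sketch for the same receiver URL; the URL is copied from the log, and the POST body is a placeholder rather than a real Alertmanager payload:

    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:   # covers the dial timeout seen in the log
        print("receiver unreachable:", exc)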
Oct 12 17:07:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:47.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:47 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:47 np0005481680 lvm[161619]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:07:47 np0005481680 lvm[161619]: VG ceph_vg0 finished
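lvm's autoactivation confirms PV /dev/loop3 is online and VG ceph_vg0 is complete, matching the loop-device-backed OSD in the listing earlier. For a scratch environment, the same layout could be created as below; destructive, the device name and ~20 GiB size are taken from the log's metadata, and /dev/loop3 must be free:

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Loop-backed VG/LV matching the names in the log. Scratch hosts only.
    run("truncate", "-s", "20G", "/var/lib/ceph-osd0.img")
    run("losetup", "/dev/loop3", "/var/lib/ceph-osd0.img")
    run("pvcreate", "/dev/loop3")
    run("vgcreate", "ceph_vg0", "/dev/loop3")
    run("lvcreate", "-l", "100%FREE", "-n", "ceph_lv0", "ceph_vg0")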
Oct 12 17:07:47 np0005481680 musing_mendel[161385]: {}
Oct 12 17:07:47 np0005481680 systemd[1]: libpod-19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310.scope: Deactivated successfully.
Oct 12 17:07:47 np0005481680 conmon[161385]: conmon 19b5e4dee2fc931355e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310.scope/container/memory.events
Oct 12 17:07:47 np0005481680 systemd[1]: libpod-19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310.scope: Consumed 1.703s CPU time.
Oct 12 17:07:47 np0005481680 podman[161332]: 2025-10-12 21:07:47.745147124 +0000 UTC m=+1.530935296 container died 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:07:47 np0005481680 python3.9[161612]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7cfba7cdbc6b814290adb3b817fb2b6b15c892345810ad9b3f42e174b175bb95-merged.mount: Deactivated successfully.
Oct 12 17:07:47 np0005481680 podman[161332]: 2025-10-12 21:07:47.806511104 +0000 UTC m=+1.592299246 container remove 19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mendel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:07:47 np0005481680 systemd[1]: libpod-conmon-19b5e4dee2fc931355e61e1da4f90f8d5c41de5b1a65f7250138351c5e2c6310.scope: Deactivated successfully.
Oct 12 17:07:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:07:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:07:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:47 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:07:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:07:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:48 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:07:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:07:48 np0005481680 python3.9[161809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:07:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:49.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:49.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:49 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:49 np0005481680 python3.9[161933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303268.0290318-959-227008580389348/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:49 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:50 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:07:50 np0005481680 python3.9[162086]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:07:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:51 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:51 np0005481680 python3.9[162239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:07:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:51 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:07:52] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:07:52 np0005481680 python3.9[162363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303270.9477217-1034-58114715990626/.source.json _original_basename=.gpdkx6se follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:52 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:07:53 np0005481680 python3.9[162515]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:07:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:53.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:53 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:53 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:07:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:55.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:55.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:55 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:07:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:55 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:56 np0005481680 python3.9[162946]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 12 17:07:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:56 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:07:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:07:57.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:07:57 np0005481680 python3.9[163123]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:07:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:57.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:57.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:57 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:57 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:58 np0005481680 podman[163249]: 2025-10-12 21:07:58.174046864 +0000 UTC m=+0.161087966 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 12 17:07:58 np0005481680 python3.9[163290]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 12 17:07:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:58 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:07:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:07:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:07:59.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:07:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:07:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:07:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:07:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:07:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:59 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:07:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:07:59 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:08:00 np0005481680 python3[163486]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:08:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:01.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:01 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:01 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:08:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:02] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Oct 12 17:08:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:02 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:08:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:08:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:04 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 12 17:08:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct 12 17:08:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:05.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:05 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:05 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:06 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:07.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:08:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:07.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:07 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:07 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:08 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:09.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:09 np0005481680 podman[163500]: 2025-10-12 21:08:09.946595265 +0000 UTC m=+9.446976481 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:08:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:10 np0005481680 podman[163631]: 2025-10-12 21:08:10.12220802 +0000 UTC m=+0.063028134 container create 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:10 np0005481680 podman[163631]: 2025-10-12 21:08:10.086869952 +0000 UTC m=+0.027690106 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:08:10 np0005481680 python3[163486]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:08:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:10 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:11 np0005481680 python3.9[163822]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:08:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:11 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:11 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:11 np0005481680 python3.9[163978]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:12] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:08:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:12] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Oct 12 17:08:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:12 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:12 np0005481680 python3.9[164054]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:08:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:13 np0005481680 python3.9[164206]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760303292.645386-1298-112750678625004/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:13 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:13 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:14 np0005481680 python3.9[164283]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:08:14 np0005481680 systemd[1]: Reloading.
Oct 12 17:08:14 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:08:14 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:08:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:14 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c002da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:15 np0005481680 python3.9[164394]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:15 np0005481680 systemd[1]: Reloading.
Oct 12 17:08:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:08:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:08:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:08:15 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:08:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:15 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:15 np0005481680 systemd[1]: Starting ovn_metadata_agent container...
Oct 12 17:08:15 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180a755de2ac8f6fc8f543c71f7236d0f8dfd9b07a765c0ed5404954c9db684c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180a755de2ac8f6fc8f543c71f7236d0f8dfd9b07a765c0ed5404954c9db684c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:15 np0005481680 systemd[1]: Started /usr/bin/podman healthcheck run 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c.
Oct 12 17:08:15 np0005481680 podman[164439]: 2025-10-12 21:08:15.81163033 +0000 UTC m=+0.191094510 container init 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:08:15 np0005481680 ovn_metadata_agent[164454]: + sudo -E kolla_set_configs
Oct 12 17:08:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:15 np0005481680 podman[164439]: 2025-10-12 21:08:15.853792822 +0000 UTC m=+0.233256942 container start 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:08:15 np0005481680 edpm-start-podman-container[164439]: ovn_metadata_agent
Oct 12 17:08:15 np0005481680 podman[164460]: 2025-10-12 21:08:15.971103555 +0000 UTC m=+0.102191389 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:08:15 np0005481680 edpm-start-podman-container[164438]: Creating additional drop-in dependency for "ovn_metadata_agent" (930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c)
Oct 12 17:08:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:15 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:16 np0005481680 systemd[1]: Reloading.
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Validating config file
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Copying service configuration files
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Writing out command to execute
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: ++ cat /run_command
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + CMD=neutron-ovn-metadata-agent
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + ARGS=
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + sudo kolla_copy_cacerts
Oct 12 17:08:16 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:08:16 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + [[ ! -n '' ]]
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + . kolla_extend_start
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + umask 0022
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: + exec neutron-ovn-metadata-agent
Oct 12 17:08:16 np0005481680 ovn_metadata_agent[164454]: Running command: 'neutron-ovn-metadata-agent'
Oct 12 17:08:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:16 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:16 np0005481680 systemd[1]: Started ovn_metadata_agent container.
Oct 12 17:08:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:17.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:08:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:17 np0005481680 systemd[1]: session-53.scope: Deactivated successfully.
Oct 12 17:08:17 np0005481680 systemd[1]: session-53.scope: Consumed 1min 8.358s CPU time.
Oct 12 17:08:17 np0005481680 systemd-logind[783]: Session 53 logged out. Waiting for processes to exit.
Oct 12 17:08:17 np0005481680 systemd-logind[783]: Removed session 53.
Oct 12 17:08:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:17 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:17 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:08:18
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', '.mgr']
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:08:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:08:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.307 164459 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.307 164459 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.307 164459 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.308 164459 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.309 164459 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.310 164459 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.311 164459 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.312 164459 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.313 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.314 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.315 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.316 164459 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.317 164459 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.318 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.319 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.320 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.321 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.322 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.323 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.324 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.325 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.326 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.327 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.327 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.327 164459 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.327 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.327 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.328 164459 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.329 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.330 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.331 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.332 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:18 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.333 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.334 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.335 164459 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.336 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.337 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.338 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.339 164459 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
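[editor's note] The block ending here is oslo.config's standard startup dump: when debug and log_options are enabled, ConfigOpts.log_opt_values() prints every registered option group by group, and any option registered with secret=True (such as transport_url above) is masked as ****. A minimal sketch of the same mechanism, with illustrative option names:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Illustrative options; secret=True is what produces the "****" masking.
    CONF.register_opts([
        cfg.IntOpt('agent_down_time', default=75),
        cfg.StrOpt('transport_url', secret=True),
    ])

    CONF([])  # parse the (empty) command line and any config files
    CONF.log_opt_values(LOG, logging.DEBUG)
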
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.348 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.349 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.349 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.349 164459 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.349 164459 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.363 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 4fd585ac-c8a3-45e9-b563-f151bc390e2e (UUID: 4fd585ac-c8a3-45e9-b563-f151bc390e2e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
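[editor's note] _load_config resolves the agent's chassis name and integration bridge from external_ids in the local Open vSwitch database (the chassis UUID is the same system-id that ovn-controller registers). Outside the agent, the same values can be read with ovs-vsctl; a sketch, assuming a local ovsdb at the default socket and that the ovn-bridge key is set (the agent has its own fallback when it is not):

    import subprocess

    def ovs_external_id(key: str) -> str:
        # 'ovs-vsctl get Open_vSwitch . external_ids:<key>' prints the
        # value quoted; strip the quotes before returning it.
        out = subprocess.check_output(
            ['ovs-vsctl', 'get', 'Open_vSwitch', '.',
             'external_ids:%s' % key], text=True)
        return out.strip().strip('"')

    chassis_name = ovs_external_id('system-id')   # e.g. 4fd585ac-c8a3-...
    ovn_bridge = ovs_external_id('ovn-bridge')    # e.g. br-int
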
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
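[editor's note] Each pg_autoscaler pass above computes a fractional PG target from the pool's share of raw capacity times its bias (4.0 for the metadata pools), then quantizes it, which is why tiny targets such as 0.0006 still land on 16 or 32: the result is rounded to a power of two and floored at the pool's minimum. A rough model of the quantization step only (simplified; the real autoscaler also applies hysteresis before actually changing pg_num):

    import math

    def quantize_pg_target(raw_target: float, pg_min: int) -> int:
        # Simplified model: round the fractional target up to the next
        # power of two, never below the pool's minimum PG count.
        if raw_target <= pg_min:
            return pg_min
        return 2 ** math.ceil(math.log2(raw_target))

    quantize_pg_target(0.0006104707950771635, pg_min=16)  # -> 16, as logged
    quantize_pg_target(0.0, pg_min=32)                    # -> 32, as logged
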
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.389 164459 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.389 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.389 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.389 164459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.392 164459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.398 164459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.404 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '4fd585ac-c8a3-45e9-b563-f151bc390e2e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], external_ids={}, name=4fd585ac-c8a3-45e9-b563-f151bc390e2e, nb_cfg_timestamp=1760303225694, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.405 164459 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f8b01a0df70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.406 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.406 164459 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.406 164459 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.406 164459 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.412 164459 DEBUG oslo_service.service [-] Started child 164594 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
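[editor's note] "Starting 1 workers" / "Started child 164594" is oslo.service's ProcessLauncher forking the metadata-proxy worker; the parent then blocks in wait(), which is also what re-dumps the "Full set of CONF" at 21:08:20 further below. A minimal sketch of that pattern, with a placeholder service class:

    from oslo_config import cfg
    from oslo_service import service

    class ProxyWorker(service.Service):
        # Placeholder worker; the real agent serves the metadata proxy here.
        def start(self):
            super().start()

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(ProxyWorker(), workers=1)
    launcher.wait()  # parent blocks, restarting children that die
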
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.415 164459 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp2dci4wtg/privsep.sock']#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.418 164594 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-427489'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:08:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
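[editor's note] The rbd_support mgr module is reloading mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); "start_after=" is empty because no schedules are defined yet. Schedules are normally created via the rbd CLI; a sketch using subprocess, with an illustrative pool and interval:

    import subprocess

    # Define an hourly mirror-snapshot schedule on the 'vms' pool.
    subprocess.check_call(
        ['rbd', 'mirror', 'snapshot', 'schedule', 'add', '--pool', 'vms', '1h'])

    # Show what the rbd_support module will pick up on its next reload.
    print(subprocess.check_output(
        ['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--pool', 'vms'],
        text=True))
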
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.463 164594 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.464 164594 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.464 164594 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.469 164594 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.479 164594 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 12 17:08:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:18.489 164594 INFO eventlet.wsgi.server [-] (164594) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
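[editor's note] The odd-looking "http:/var/lib/neutron/metadata_proxy" above is eventlet printing a UNIX socket path where a host:port would normally appear: the metadata proxy listens on a filesystem socket, not TCP. A sketch of the same setup (socket path and response body are illustrative):

    import socket
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata\n']

    # eventlet.listen() accepts a path when family=AF_UNIX, and
    # wsgi.server() then logs it as "http:/<path>", as seen above.
    sock = eventlet.listen('/tmp/metadata_proxy', family=socket.AF_UNIX)
    wsgi.server(sock, app)
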
Oct 12 17:08:19 np0005481680 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.184 164459 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.186 164459 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2dci4wtg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.034 164600 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.043 164600 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.047 164600 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.047 164600 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164600#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.190 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[224a55db-d2e2-4a4e-af02-c23c34e715b7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
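[editor's note] The pid-164600 entries are the privsep daemon itself: the helper spawned via sudo/rootwrap at 21:08:18.415 re-executes as root, keeps only CAP_SYS_ADMIN, and then answers calls over the /tmp/tmp2dci4wtg/privsep.sock channel (the "privsep: reply[...]" entries). In application code this is wrapped by a PrivContext; a sketch of how such an entrypoint is declared (names are illustrative, not Neutron's actual definitions):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Illustrative context; Neutron defines similar ones such as the
    # neutron.privileged.namespace_cmd context referenced in the log above.
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def read_root_only_file(path):
        # Runs inside the root privsep daemon; the caller gets the
        # return value back over the unix-socket channel.
        with open(path) as f:
            return f.read()
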
Oct 12 17:08:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:19.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:19.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
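[editor's note] The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, returning 200 with near-zero latency, look like load-balancer health probes against radosgw's beast frontend (an inference from the pattern, not something the log states). An equivalent probe in Python, with a placeholder gateway host and port:

    import http.client

    # Hypothetical endpoint; mirrors the anonymous HEAD checks above.
    conn = http.client.HTTPConnection('rgw.internal', 8080, timeout=2)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)  # 200 when the gateway is healthy
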
Oct 12 17:08:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:19 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.709 164600 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.710 164600 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:08:19 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:19.710 164600 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:08:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:19 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.234 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[8c094e22-5bf5-4c15-a523-07211e6dca75]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.239 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, column=external_ids, values=({'neutron:ovn-metadata-id': '03abe482-5630-5572-a10b-6b701b5bf336'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.286 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
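[editor's note] Before settling into its main loop, the agent registers itself in its southbound Chassis_Private row: the DbAddCommand merges a neutron:ovn-metadata-id into external_ids and the DbSetCommand records the integration bridge. With ovsdbapp the same pair of commands looks roughly like this (a sketch; the TLS key/cert setup required for the ssl: endpoint is omitted):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb_api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=180))

    chassis = '4fd585ac-c8a3-45e9-b563-f151bc390e2e'  # from the log above
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id': '03abe482-5630-5572-a10b-6b701b5bf336'}))
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))
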
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.319 164459 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.320 164459 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.320 164459 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.320 164459 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.320 164459 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.320 164459 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.321 164459 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.321 164459 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.321 164459 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.322 164459 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.322 164459 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.322 164459 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.322 164459 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.323 164459 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.323 164459 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.323 164459 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.324 164459 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.324 164459 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.324 164459 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.324 164459 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.324 164459 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.325 164459 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.325 164459 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.325 164459 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.325 164459 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.326 164459 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.326 164459 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.326 164459 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.327 164459 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.327 164459 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.327 164459 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.327 164459 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.328 164459 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.328 164459 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.328 164459 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.328 164459 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.329 164459 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.329 164459 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.329 164459 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.330 164459 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.330 164459 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.330 164459 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.330 164459 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.331 164459 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.331 164459 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.331 164459 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.331 164459 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.331 164459 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.332 164459 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.332 164459 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.332 164459 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.332 164459 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.333 164459 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.333 164459 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.333 164459 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.333 164459 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.333 164459 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.334 164459 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.334 164459 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.334 164459 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.335 164459 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.335 164459 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.335 164459 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.335 164459 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:20 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.336 164459 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.336 164459 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.336 164459 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.336 164459 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.337 164459 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.337 164459 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.337 164459 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.337 164459 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.338 164459 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.338 164459 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.338 164459 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.338 164459 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.338 164459 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.339 164459 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.339 164459 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.339 164459 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.339 164459 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.340 164459 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.340 164459 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.340 164459 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.340 164459 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.341 164459 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.341 164459 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.341 164459 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.341 164459 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.341 164459 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.342 164459 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.342 164459 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.342 164459 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.342 164459 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.342 164459 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.343 164459 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.343 164459 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.343 164459 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.344 164459 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.344 164459 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.344 164459 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.344 164459 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.344 164459 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.345 164459 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.345 164459 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.345 164459 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.345 164459 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.346 164459 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.346 164459 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.346 164459 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.347 164459 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.347 164459 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.347 164459 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.347 164459 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.347 164459 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.348 164459 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.348 164459 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.348 164459 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.348 164459 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.349 164459 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.349 164459 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.349 164459 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.349 164459 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.350 164459 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.350 164459 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.350 164459 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.350 164459 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.351 164459 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.351 164459 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.351 164459 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.351 164459 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.352 164459 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.352 164459 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.352 164459 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.352 164459 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.353 164459 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.353 164459 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.353 164459 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.354 164459 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.354 164459 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.354 164459 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.354 164459 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.354 164459 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.355 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.355 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.355 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.355 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.356 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.356 164459 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.356 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.356 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.356 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.357 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.357 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.357 164459 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.357 164459 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.358 164459 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.358 164459 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.358 164459 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.358 164459 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.358 164459 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.359 164459 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.359 164459 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.359 164459 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.359 164459 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.359 164459 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.360 164459 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.360 164459 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.360 164459 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.360 164459 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.361 164459 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.361 164459 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.361 164459 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.362 164459 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.362 164459 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.362 164459 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.362 164459 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.363 164459 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.363 164459 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.363 164459 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.363 164459 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.364 164459 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.364 164459 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.364 164459 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.364 164459 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.365 164459 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.365 164459 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.365 164459 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.365 164459 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.366 164459 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.366 164459 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.366 164459 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.366 164459 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.367 164459 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.367 164459 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.367 164459 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.367 164459 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.368 164459 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.369 164459 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.370 164459 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.371 164459 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.372 164459 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.373 164459 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.374 164459 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.375 164459 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.376 164459 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.377 164459 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.378 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.379 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.380 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.381 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.382 164459 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.383 164459 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.383 164459 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:08:20 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:08:20.383 164459 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
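[annotation] The block above is oslo.config's standard startup dump: the agent calls log_opt_values(), which emits one DEBUG line per resolved option and masks secret values such as transport_url as ****. A minimal sketch of the same mechanism, assuming only that oslo.config is installed; the single registered option mirrors the ovs.ovsdb_connection line above, while the real agent registers hundreds of options across many groups:

```python
import logging

from oslo_config import cfg

CONF = cfg.CONF
# Hypothetical single option for illustration, taken from the dump above.
CONF.register_opts(
    [cfg.StrOpt('ovsdb_connection', default='tcp:127.0.0.1:6640')],
    group='ovs',
)

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger('oslo_service.service')

CONF([])                                  # parse an (empty) command line
CONF.log_opt_values(LOG, logging.DEBUG)   # one DEBUG line per option
```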
Oct 12 17:08:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 12 17:08:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210820 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:08:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
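[annotation] The repeating "HEAD / HTTP/1.0" triplets from 192.168.122.100 and .102 are anonymous load-balancer health probes against the radosgw beast frontend. A hedged reproduction in Python; the target host and port are assumptions, and http.client speaks HTTP/1.1 rather than 1.0, so this approximates rather than replays the probe:

```python
import http.client

# Host/port are assumptions for the local RGW beast frontend.
conn = http.client.HTTPConnection('localhost', 8080, timeout=5)
conn.request('HEAD', '/')                 # anonymous, no auth headers
print(conn.getresponse().status)          # RGW answers 200 with empty body
conn.close()
```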
Oct 12 17:08:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:21 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:21 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:22] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:08:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:22] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:08:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:22 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:22 np0005481680 systemd-logind[783]: New session 54 of user zuul.
Oct 12 17:08:22 np0005481680 systemd[1]: Started Session 54 of User zuul.
Oct 12 17:08:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:23.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:23 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:23 np0005481680 python3.9[164762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:08:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:23 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:24 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:25 np0005481680 python3.9[164919]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
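[annotation] This ansible.legacy.command task is a simple existence check: is there a container named exactly nova_virtlogd? The same query outside Ansible, using the filter and Go template verbatim from the task and assuming podman is on PATH:

```python
import subprocess

# Prints nothing when no container matches the anchored name filter.
out = subprocess.run(
    ['podman', 'ps', '-a',
     '--filter', 'name=^nova_virtlogd$',
     '--format', '{{.Names}}'],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out or 'no nova_virtlogd container')
```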
Oct 12 17:08:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:25 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:25 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:26 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:08:26 np0005481680 python3.9[165087]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:08:26 np0005481680 systemd[1]: Reloading.
Oct 12 17:08:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:27.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
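[annotation] The Alertmanager dispatcher error above means both Ceph dashboard webhook receivers timed out. What it keeps retrying is a JSON POST along the lines of this hedged sketch; the URL is copied from the error, but the minimal alerts payload is an assumption, not the full Alertmanager webhook schema:

```python
import json
import urllib.request

# URL taken from the dispatcher error; payload shape is an assumption.
req = urllib.request.Request(
    'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
    data=json.dumps({'alerts': []}).encode(),
    headers={'Content-Type': 'application/json'},
)
try:
    urllib.request.urlopen(req, timeout=5)
except OSError as exc:                    # unreachable receiver, as logged
    print(f'notify failed: {exc}')
```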
Oct 12 17:08:27 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:08:27 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:08:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:27.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:27 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:27 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:28 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:08:28 np0005481680 python3.9[165274]: ansible-ansible.builtin.service_facts Invoked
Oct 12 17:08:28 np0005481680 network[165291]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:08:28 np0005481680 network[165292]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:08:28 np0005481680 network[165293]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:08:28 np0005481680 podman[165294]: 2025-10-12 21:08:28.666540172 +0000 UTC m=+0.143849048 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 12 17:08:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:08:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:29.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:30 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:08:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:32] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:08:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:32] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:08:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:08:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:08:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:08:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:08:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:08:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
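[annotation] The mgr's cephadm module polls the OSD blocklist this way every few seconds. The equivalent query from a shell via the ceph CLI, assuming an admin keyring is available on the host; blocklist is the current name of the former blacklist subcommand:

```python
import json
import subprocess

raw = subprocess.run(
    ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(raw or '[]'))            # [] when nothing is blocklisted
```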
Oct 12 17:08:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:33 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:33 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:34 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:34 np0005481680 python3.9[165588]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:08:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:35 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:08:35 np0005481680 python3.9[165741]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:08:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:35.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:08:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:35 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:08:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:08:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:35 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:36 np0005481680 python3.9[165896]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:36 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:08:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:37.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:08:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:08:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:37.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:08:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:37 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:37 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:38 np0005481680 python3.9[166080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:38 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:08:39 np0005481680 python3.9[166233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:39 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:39.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:39 np0005481680 python3.9[166388]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:08:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:39 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:40 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:08:40 np0005481680 python3.9[166541]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
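[annotation] The systemd_service tasks above disable and stop each tripleo_nova_virt* unit in turn (enabled=False, state=stopped). A hedged non-Ansible equivalent for two of the units; systemctl disable --now performs both steps in one call and needs root:

```python
import subprocess

# check=False: a unit that is already gone is not an error for this step.
for unit in ('tripleo_nova_virtlogd_wrapper.service',
             'tripleo_nova_virtqemud.service'):
    subprocess.run(['systemctl', 'disable', '--now', unit], check=False)
```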
Oct 12 17:08:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210840 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:08:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:41.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:41 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:41.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:41 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:08:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:08:42 np0005481680 python3.9[166696]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:42 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:08:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:43 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:43.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:43 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:44 np0005481680 python3.9[166848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:44 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:08:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:45 np0005481680 python3.9[167004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:45 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:45 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f73180045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:46 np0005481680 podman[167158]: 2025-10-12 21:08:46.135605036 +0000 UTC m=+0.097480800 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
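[annotation] The podman health_status records for ovn_controller and ovn_metadata_agent come from the configured healthcheck (test: /openstack/healthcheck in the container config_data). The same check can be forced by hand; a hedged sketch using the container name from the log:

```python
import subprocess

# Runs the container's own '/openstack/healthcheck' test; exit code 0 is
# what podman reports as health_status=healthy in the events above.
rc = subprocess.run(['podman', 'healthcheck', 'run',
                     'ovn_metadata_agent']).returncode
print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
```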
Oct 12 17:08:46 np0005481680 python3.9[167159]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:46 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:08:47 np0005481680 python3.9[167330]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:47.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:08:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:47 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7314003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 12 17:08:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:47.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 12 17:08:47 np0005481680 python3.9[167486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:47 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:08:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:08:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:48 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0027e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:08:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:08:48 np0005481680 python3.9[167663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:08:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:49 np0005481680 python3.9[167872]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
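[annotation] After stopping the units, the play removes the leftover unit files with ansible.builtin.file state=absent, first under /usr/lib/systemd/system and then under /etc/systemd/system. A hedged one-file equivalent; missing_ok mirrors the task's idempotence, and a systemd daemon-reload like the one seen earlier normally follows such removals:

```python
from pathlib import Path

# One unit file from the sequence above; removal is a no-op if absent.
Path('/etc/systemd/system/tripleo_nova_libvirt.target').unlink(
    missing_ok=True)
```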
Oct 12 17:08:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:49 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:49.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:49 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:08:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:49 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.069423983 +0000 UTC m=+0.075263334 container create fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.038171479 +0000 UTC m=+0.044010870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:50 np0005481680 systemd[1]: Started libpod-conmon-fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db.scope.
Oct 12 17:08:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.209779659 +0000 UTC m=+0.215619050 container init fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.222612847 +0000 UTC m=+0.228452198 container start fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.226564489 +0000 UTC m=+0.232403830 container attach fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:08:50 np0005481680 systemd[1]: libpod-fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db.scope: Deactivated successfully.
Oct 12 17:08:50 np0005481680 confident_moore[168133]: 167 167
Oct 12 17:08:50 np0005481680 conmon[168133]: conmon fc63356eb5109121767d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db.scope/container/memory.events
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.239317425 +0000 UTC m=+0.245156766 container died fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:08:50 np0005481680 python3.9[168128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a591d75c0524ae817caee9c02e30b867c8a3ca9e1129c8324b12938d55354e87-merged.mount: Deactivated successfully.
Oct 12 17:08:50 np0005481680 podman[168098]: 2025-10-12 21:08:50.308324319 +0000 UTC m=+0.314163660 container remove fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:08:50 np0005481680 systemd[1]: libpod-conmon-fc63356eb5109121767d765144326cb79a35916ab6ac36cdb633bd8833dcd3db.scope: Deactivated successfully.
Oct 12 17:08:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:50 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:08:50 np0005481680 podman[168205]: 2025-10-12 21:08:50.569046709 +0000 UTC m=+0.071533068 container create b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:08:50 np0005481680 systemd[1]: Started libpod-conmon-b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd.scope.
Oct 12 17:08:50 np0005481680 podman[168205]: 2025-10-12 21:08:50.538844915 +0000 UTC m=+0.041331274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:50 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:50 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:50 np0005481680 podman[168205]: 2025-10-12 21:08:50.703581639 +0000 UTC m=+0.206067968 container init b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:08:50 np0005481680 podman[168205]: 2025-10-12 21:08:50.718416873 +0000 UTC m=+0.220903232 container start b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 17:08:50 np0005481680 podman[168205]: 2025-10-12 21:08:50.722591873 +0000 UTC m=+0.225078232 container attach b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 17:08:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:51 np0005481680 python3.9[168331]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:51 np0005481680 heuristic_gates[168263]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:08:51 np0005481680 heuristic_gates[168263]: --> All data devices are unavailable
Oct 12 17:08:51 np0005481680 systemd[1]: libpod-b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd.scope: Deactivated successfully.
Oct 12 17:08:51 np0005481680 podman[168205]: 2025-10-12 21:08:51.189349228 +0000 UTC m=+0.691835577 container died b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4feccb1286e00f4089c920e478c12ab509adfdd31105d9b7dbcab2f0d14f9ebd-merged.mount: Deactivated successfully.
Oct 12 17:08:51 np0005481680 podman[168205]: 2025-10-12 21:08:51.264124557 +0000 UTC m=+0.766610916 container remove b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:08:51 np0005481680 systemd[1]: libpod-conmon-b7ff030fbf550766ff44467a8cd6672b8d8d91fe28a7d81ff9efdd80b3eb95bd.scope: Deactivated successfully.
Oct 12 17:08:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:51 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0027e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 12 17:08:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:51.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 12 17:08:51 np0005481680 python3.9[168563]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.001327131 +0000 UTC m=+0.076780448 container create d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:08:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:52 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:52] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:08:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:08:52] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:51.971899029 +0000 UTC m=+0.047352406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:52 np0005481680 systemd[1]: Started libpod-conmon-d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357.scope.
Oct 12 17:08:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.119716349 +0000 UTC m=+0.195169716 container init d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.133275436 +0000 UTC m=+0.208728753 container start d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.137750544 +0000 UTC m=+0.213203911 container attach d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:08:52 np0005481680 unruffled_shaw[168637]: 167 167
Oct 12 17:08:52 np0005481680 systemd[1]: libpod-d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357.scope: Deactivated successfully.
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.141930614 +0000 UTC m=+0.217383941 container died d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:08:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2085a138bd6adc4b8d1c872a1ed1e54669142cdaf331392775937d18f3dba160-merged.mount: Deactivated successfully.
Oct 12 17:08:52 np0005481680 podman[168596]: 2025-10-12 21:08:52.199594654 +0000 UTC m=+0.275047971 container remove d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shaw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:08:52 np0005481680 systemd[1]: libpod-conmon-d4718283434156eb55506bb2061f0d33737fd7b99600fd40cdeeadedee8d9357.scope: Deactivated successfully.
Oct 12 17:08:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:52 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.459037607 +0000 UTC m=+0.073313649 container create b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 17:08:52 np0005481680 systemd[1]: Started libpod-conmon-b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07.scope.
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.430762958 +0000 UTC m=+0.045039030 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38a11684703fab4a62ebd7d6176837f3b78bf9d371b22f8eb157598f0ff3ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38a11684703fab4a62ebd7d6176837f3b78bf9d371b22f8eb157598f0ff3ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38a11684703fab4a62ebd7d6176837f3b78bf9d371b22f8eb157598f0ff3ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca38a11684703fab4a62ebd7d6176837f3b78bf9d371b22f8eb157598f0ff3ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.571300059 +0000 UTC m=+0.185576131 container init b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.585146766 +0000 UTC m=+0.199422838 container start b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.589676095 +0000 UTC m=+0.203952207 container attach b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:08:52 np0005481680 python3.9[168809]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]: {
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:    "0": [
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:        {
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "devices": [
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "/dev/loop3"
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            ],
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "lv_name": "ceph_lv0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "lv_size": "21470642176",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "name": "ceph_lv0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "tags": {
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.cluster_name": "ceph",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.crush_device_class": "",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.encrypted": "0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.osd_id": "0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.type": "block",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.vdo": "0",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:                "ceph.with_tpm": "0"
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            },
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "type": "block",
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:            "vg_name": "ceph_vg0"
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:        }
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]:    ]
Oct 12 17:08:52 np0005481680 awesome_diffie[168797]: }
Oct 12 17:08:52 np0005481680 systemd[1]: libpod-b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07.scope: Deactivated successfully.
Oct 12 17:08:52 np0005481680 podman[168737]: 2025-10-12 21:08:52.980678502 +0000 UTC m=+0.594954544 container died b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:08:53 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ca38a11684703fab4a62ebd7d6176837f3b78bf9d371b22f8eb157598f0ff3ea-merged.mount: Deactivated successfully.
Oct 12 17:08:53 np0005481680 podman[168737]: 2025-10-12 21:08:53.055784232 +0000 UTC m=+0.670060294 container remove b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:53 np0005481680 systemd[1]: libpod-conmon-b62c24e3d74f417a9cba64a0780a956c37e3267cbc0bf92f101674f5546a4e07.scope: Deactivated successfully.
Oct 12 17:08:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:53 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:53.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:53 np0005481680 python3.9[169029]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:53 np0005481680 podman[169112]: 2025-10-12 21:08:53.889575209 +0000 UTC m=+0.077102828 container create e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:08:53 np0005481680 systemd[1]: Started libpod-conmon-e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc.scope.
Oct 12 17:08:53 np0005481680 podman[169112]: 2025-10-12 21:08:53.853761894 +0000 UTC m=+0.041289533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:53 np0005481680 podman[169112]: 2025-10-12 21:08:53.994773779 +0000 UTC m=+0.182301438 container init e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c001d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:54 np0005481680 podman[169112]: 2025-10-12 21:08:54.007276267 +0000 UTC m=+0.194803886 container start e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:08:54 np0005481680 podman[169112]: 2025-10-12 21:08:54.01191773 +0000 UTC m=+0.199445409 container attach e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:08:54 np0005481680 hungry_visvesvaraya[169167]: 167 167
Oct 12 17:08:54 np0005481680 systemd[1]: libpod-e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc.scope: Deactivated successfully.
Oct 12 17:08:54 np0005481680 conmon[169167]: conmon e45cf3900742e8159b46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc.scope/container/memory.events
Oct 12 17:08:54 np0005481680 podman[169112]: 2025-10-12 21:08:54.016143791 +0000 UTC m=+0.203671410 container died e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-784fba0f18f978d0e8e8767d6a6ef93655d932088bf868fe6ea8570ec80237ea-merged.mount: Deactivated successfully.
Oct 12 17:08:54 np0005481680 podman[169112]: 2025-10-12 21:08:54.068539469 +0000 UTC m=+0.256067098 container remove e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:08:54 np0005481680 systemd[1]: libpod-conmon-e45cf3900742e8159b463e54222c1b4f3448c59f5cd9bfc3975c14b1015babdc.scope: Deactivated successfully.
Oct 12 17:08:54 np0005481680 podman[169267]: 2025-10-12 21:08:54.304369448 +0000 UTC m=+0.055986093 container create 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:08:54 np0005481680 systemd[1]: Started libpod-conmon-978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2.scope.
Oct 12 17:08:54 np0005481680 podman[169267]: 2025-10-12 21:08:54.278377543 +0000 UTC m=+0.029994198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:08:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:54 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:08:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f018e8f14ed501620cf1fe2f60975a28fad3a203db75835687791ed4f9d71e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f018e8f14ed501620cf1fe2f60975a28fad3a203db75835687791ed4f9d71e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f018e8f14ed501620cf1fe2f60975a28fad3a203db75835687791ed4f9d71e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:54 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f018e8f14ed501620cf1fe2f60975a28fad3a203db75835687791ed4f9d71e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:08:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:54 np0005481680 podman[169267]: 2025-10-12 21:08:54.436774276 +0000 UTC m=+0.188390941 container init 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:08:54 np0005481680 python3.9[169262]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:08:54 np0005481680 podman[169267]: 2025-10-12 21:08:54.450624892 +0000 UTC m=+0.202241547 container start 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:08:54 np0005481680 podman[169267]: 2025-10-12 21:08:54.456101089 +0000 UTC m=+0.207717784 container attach 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 17:08:55 np0005481680 lvm[169513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:08:55 np0005481680 lvm[169513]: VG ceph_vg0 finished
Oct 12 17:08:55 np0005481680 fervent_vaughan[169283]: {}
Oct 12 17:08:55 np0005481680 systemd[1]: libpod-978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2.scope: Deactivated successfully.
Oct 12 17:08:55 np0005481680 systemd[1]: libpod-978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2.scope: Consumed 1.415s CPU time.
Oct 12 17:08:55 np0005481680 podman[169267]: 2025-10-12 21:08:55.285364886 +0000 UTC m=+1.036981501 container died 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:08:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-92f018e8f14ed501620cf1fe2f60975a28fad3a203db75835687791ed4f9d71e-merged.mount: Deactivated successfully.
Oct 12 17:08:55 np0005481680 podman[169267]: 2025-10-12 21:08:55.341829612 +0000 UTC m=+1.093446237 container remove 978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:08:55 np0005481680 python3.9[169510]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:08:55 np0005481680 systemd[1]: libpod-conmon-978b3d293ba957e5bc20bd65d851336c41f9fffa5e1519c776da810f876e89b2.scope: Deactivated successfully.
Oct 12 17:08:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:08:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:08:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:55 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:55.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:08:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:56 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:08:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:56 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c001d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:56 np0005481680 python3.9[169705]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 17:08:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:08:57.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:08:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:57 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:57.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:57 np0005481680 python3.9[169883]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:08:57 np0005481680 systemd[1]: Reloading.
Oct 12 17:08:57 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:08:57 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Oct 12 17:08:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:58 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:58 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:08:59 np0005481680 python3.9[170071]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:08:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:08:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:08:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:08:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:08:59 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:08:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:08:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:08:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:08:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:08:59 np0005481680 podman[170198]: 2025-10-12 21:08:59.74190355 +0000 UTC m=+0.124121121 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 12 17:08:59 np0005481680 python3.9[170247]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:00 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:09:00 np0005481680 python3.9[170406]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:01 np0005481680 python3.9[170560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:01 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:02 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:02] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:09:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:02] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:09:02 np0005481680 python3.9[170714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:02 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:03 np0005481680 python3.9[170867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:09:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:09:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:03 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:03 np0005481680 python3.9[171022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:09:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:04 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:04 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0033a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:05.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:05 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 12 17:09:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 12 17:09:05 np0005481680 python3.9[171177]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 12 17:09:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:06 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:06 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:06 np0005481680 python3.9[171330]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 17:09:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:07.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:09:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:07.029Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:09:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:07.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:07 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:07.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:08 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:08 np0005481680 python3.9[171490]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 12 17:09:08 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:09:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:08 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:09.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:09 np0005481680 python3.9[171652]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:09:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:09 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:09.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:10 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:10 np0005481680 python3.9[171738]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:09:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:10 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:09:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:11.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:11 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f0001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:12] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:09:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:12] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:09:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:12 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:12 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:13 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:14 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:14 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:15 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:15.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:16 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:16 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:16 np0005481680 podman[171780]: 2025-10-12 21:09:16.815458994 +0000 UTC m=+0.090145860 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 12 17:09:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:17.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:09:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:17.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:09:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:17.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:17 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f730c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:18 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:09:18
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['backups', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs', 'images', 'default.rgw.meta', '.mgr']
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:09:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:09:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:09:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:09:18.342 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:09:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:09:18.343 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:09:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:09:18.343 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:09:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:18 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:09:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:09:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:19 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:20 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:20 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 17:09:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:20 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:09:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:21 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:21.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:22 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:22 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:23 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e4000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 12 17:09:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 12 17:09:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:24 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:24 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:25 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:26 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:26 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:27.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:27.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:27 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 12 17:09:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:27.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 12 17:09:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:28 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:28 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:29.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:29 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72f4004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:30 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72fc0047d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:30 np0005481680 podman[171990]: 2025-10-12 21:09:30.140805762 +0000 UTC m=+0.105683196 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 12 17:09:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:30 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7318001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:09:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:31.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:31 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:31.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:32 np0005481680 kernel: ganesha.nfsd[171843]: segfault at 50 ip 00007f73cb20932e sp 00007f73967fb210 error 4 in libntirpc.so.5.8[7f73cb1ee000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 12 17:09:32 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:09:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[154305]: 12/10/2025 21:09:32 : epoch 68ec186e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f72e40016a0 fd 48 proxy ignored for local
Oct 12 17:09:32 np0005481680 systemd[1]: Started Process Core Dump (PID 172025/UID 0).
Oct 12 17:09:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:09:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:09:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:33.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:34 np0005481680 systemd-coredump[172026]: Process 154309 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 64:
                                                       #0  0x00007f73cb20932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:09:34 np0005481680 systemd[1]: systemd-coredump@5-172025-0.service: Deactivated successfully.
Oct 12 17:09:34 np0005481680 systemd[1]: systemd-coredump@5-172025-0.service: Consumed 1.316s CPU time.
Oct 12 17:09:35 np0005481680 podman[172033]: 2025-10-12 21:09:35.030236052 +0000 UTC m=+0.047510610 container died 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:09:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-34c4cc1977cf9ade1f902a89110783d0d662bd7fd3e1494e849cbf545dcc3c87-merged.mount: Deactivated successfully.
Oct 12 17:09:35 np0005481680 podman[172033]: 2025-10-12 21:09:35.11298234 +0000 UTC m=+0.130256898 container remove 8e2f6fe04d1d37f2887bda99e46e60e39ad7bd709df44568ccbc9aec8ddd6ce7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:09:35 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:09:35 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:09:35 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.273s CPU time.
Oct 12 17:09:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:35.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:37.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:09:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=cleanup t=2025-10-12T21:09:39.148504414Z level=info msg="Completed cleanup jobs" duration=19.560682ms
Oct 12 17:09:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugins.update.checker t=2025-10-12T21:09:39.280652334Z level=info msg="Update check succeeded" duration=51.624295ms
Oct 12 17:09:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana.update.checker t=2025-10-12T21:09:39.298719268Z level=info msg="Update check succeeded" duration=46.989935ms
Oct 12 17:09:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/210940 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:09:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:09:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:41.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:41.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:09:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:09:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:09:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:43.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:09:43 np0005481680 kernel: SELinux:  Converting 2772 SID table entries...
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 17:09:43 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 17:09:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:09:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:43.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:09:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:09:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:45.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:45 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 6.
Oct 12 17:09:45 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:09:45 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 12 17:09:45 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.273s CPU time.
Oct 12 17:09:45 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:09:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:45.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:45 np0005481680 podman[172167]: 2025-10-12 21:09:45.878913843 +0000 UTC m=+0.067242087 container create 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:09:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:45 np0005481680 podman[172167]: 2025-10-12 21:09:45.845042864 +0000 UTC m=+0.033371158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:09:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32252db6a2f49e14bd3e50dab22d5a397b374df2707c8d7565c9864841fd8bde/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:09:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32252db6a2f49e14bd3e50dab22d5a397b374df2707c8d7565c9864841fd8bde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:09:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32252db6a2f49e14bd3e50dab22d5a397b374df2707c8d7565c9864841fd8bde/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:09:45 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32252db6a2f49e14bd3e50dab22d5a397b374df2707c8d7565c9864841fd8bde/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:09:45 np0005481680 podman[172167]: 2025-10-12 21:09:45.964684173 +0000 UTC m=+0.153012457 container init 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:09:45 np0005481680 podman[172167]: 2025-10-12 21:09:45.973536231 +0000 UTC m=+0.161864475 container start 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:09:45 np0005481680 bash[172167]: 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73
Oct 12 17:09:45 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:09:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:45 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:09:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:45 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:09:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:09:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:09:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:47.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:47 np0005481680 podman[172224]: 2025-10-12 21:09:47.128115437 +0000 UTC m=+0.084248967 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:09:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:47.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:09:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:09:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:09:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:09:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:09:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:09:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:49.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:09:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:09:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:09:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:09:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:09:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:09:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:09:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:52 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:09:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:52 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:09:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:09:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:09:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:09:53 np0005481680 kernel: SELinux:  Converting 2772 SID table entries...
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 17:09:53 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 17:09:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 937 B/s wr, 3 op/s
Oct 12 17:09:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:55.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:09:56 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 12 17:09:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 937 B/s wr, 3 op/s
Oct 12 17:09:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:09:57.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:09:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 17:09:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:09:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:09:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:09:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:57.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:09:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:09:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 937 B/s wr, 3 op/s
Oct 12 17:09:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:09:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:09:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:09:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:09:59 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:09:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:09:59 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:09:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:09:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:09:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:09:59.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 17:10:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:00 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:00 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:10:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:00 np0005481680 podman[172411]: 2025-10-12 21:10:00.894110398 +0000 UTC m=+0.150789659 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.438356839 +0000 UTC m=+0.091210351 container create 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:10:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.390431137 +0000 UTC m=+0.043284689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:01 np0005481680 systemd[1]: Started libpod-conmon-1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1.scope.
Oct 12 17:10:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.554737972 +0000 UTC m=+0.207591544 container init 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.565247157 +0000 UTC m=+0.218100659 container start 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.570594108 +0000 UTC m=+0.223447660 container attach 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:10:01 np0005481680 great_dijkstra[172522]: 167 167
Oct 12 17:10:01 np0005481680 systemd[1]: libpod-1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1.scope: Deactivated successfully.
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.574703407 +0000 UTC m=+0.227556909 container died 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:10:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-166157cba7d7c6602a5f98d4b3f37201c7c2b105de989919d593e62a5c2810af-merged.mount: Deactivated successfully.
Oct 12 17:10:01 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:01 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:10:01 np0005481680 podman[172506]: 2025-10-12 21:10:01.637245253 +0000 UTC m=+0.290098755 container remove 1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_dijkstra, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:10:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:01 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:01 np0005481680 systemd[1]: libpod-conmon-1c7bd6599c13b41405ae98f357d867e691a227267d542b7dd2e6e77b100095e1.scope: Deactivated successfully.
Oct 12 17:10:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:01.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:01 np0005481680 podman[172546]: 2025-10-12 21:10:01.926025451 +0000 UTC m=+0.079481413 container create 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:10:01 np0005481680 podman[172546]: 2025-10-12 21:10:01.891605245 +0000 UTC m=+0.045061247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:01 np0005481680 systemd[1]: Started libpod-conmon-385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6.scope.
Oct 12 17:10:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:10:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:10:02 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:02 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:02 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211002 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:10:02 np0005481680 podman[172546]: 2025-10-12 21:10:02.077765254 +0000 UTC m=+0.231221226 container init 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:10:02 np0005481680 podman[172546]: 2025-10-12 21:10:02.091492735 +0000 UTC m=+0.244948687 container start 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:10:02 np0005481680 podman[172546]: 2025-10-12 21:10:02.096280482 +0000 UTC m=+0.249736434 container attach 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:10:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:02 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:10:02 np0005481680 gallant_curie[172564]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:10:02 np0005481680 gallant_curie[172564]: --> All data devices are unavailable
Oct 12 17:10:02 np0005481680 systemd[1]: libpod-385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6.scope: Deactivated successfully.
Oct 12 17:10:02 np0005481680 podman[172546]: 2025-10-12 21:10:02.576383624 +0000 UTC m=+0.729839616 container died 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:10:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bb81dc08362bfef2126d89dbb617f6471296b117c83796c44b9fa35035eb6bf1-merged.mount: Deactivated successfully.
Oct 12 17:10:02 np0005481680 podman[172546]: 2025-10-12 21:10:02.638215672 +0000 UTC m=+0.791671654 container remove 385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_curie, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:10:02 np0005481680 systemd[1]: libpod-conmon-385f1663071276fc479fa1d3b537353528c4303f9e4188391b4f84f118225ad6.scope: Deactivated successfully.
Oct 12 17:10:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:10:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:10:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.483595776 +0000 UTC m=+0.096315725 container create 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.428694571 +0000 UTC m=+0.041414590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:03 np0005481680 systemd[1]: Started libpod-conmon-6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad.scope.
Oct 12 17:10:03 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.606275834 +0000 UTC m=+0.218995813 container init 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.615919608 +0000 UTC m=+0.228639587 container start 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.621785422 +0000 UTC m=+0.234505381 container attach 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:10:03 np0005481680 confident_almeida[172700]: 167 167
Oct 12 17:10:03 np0005481680 systemd[1]: libpod-6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad.scope: Deactivated successfully.
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.625809248 +0000 UTC m=+0.238529227 container died 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:10:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:03 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:03 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6ed1a503fe6d3d9e7ef2294598be9142abd4dcbdc42bbb94efbea2c5d30da6ce-merged.mount: Deactivated successfully.
Oct 12 17:10:03 np0005481680 podman[172683]: 2025-10-12 21:10:03.684884183 +0000 UTC m=+0.297604162 container remove 6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Oct 12 17:10:03 np0005481680 systemd[1]: libpod-conmon-6f5cef84499149a793522a7b88f0f8f6f57c1702bb24c5dd3f109d8c2c5d38ad.scope: Deactivated successfully.
Oct 12 17:10:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:03.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:03 np0005481680 podman[172726]: 2025-10-12 21:10:03.928121453 +0000 UTC m=+0.073785253 container create 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:10:03 np0005481680 podman[172726]: 2025-10-12 21:10:03.896586933 +0000 UTC m=+0.042250783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:03 np0005481680 systemd[1]: Started libpod-conmon-041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6.scope.
Oct 12 17:10:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db56f2c6d43f03ee8daa6d519c6759dc2d24a84b4e8b8a83a235ba7c09e487e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db56f2c6d43f03ee8daa6d519c6759dc2d24a84b4e8b8a83a235ba7c09e487e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db56f2c6d43f03ee8daa6d519c6759dc2d24a84b4e8b8a83a235ba7c09e487e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db56f2c6d43f03ee8daa6d519c6759dc2d24a84b4e8b8a83a235ba7c09e487e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:04 np0005481680 podman[172726]: 2025-10-12 21:10:04.056732388 +0000 UTC m=+0.202396218 container init 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:10:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:04 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:04 np0005481680 podman[172726]: 2025-10-12 21:10:04.069159025 +0000 UTC m=+0.214822795 container start 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:10:04 np0005481680 podman[172726]: 2025-10-12 21:10:04.09673907 +0000 UTC m=+0.242402870 container attach 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]: {
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:    "0": [
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:        {
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "devices": [
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "/dev/loop3"
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            ],
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "lv_name": "ceph_lv0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "lv_size": "21470642176",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "name": "ceph_lv0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "tags": {
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.cluster_name": "ceph",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.crush_device_class": "",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.encrypted": "0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.osd_id": "0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.type": "block",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.vdo": "0",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:                "ceph.with_tpm": "0"
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            },
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "type": "block",
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:            "vg_name": "ceph_vg0"
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:        }
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]:    ]
Oct 12 17:10:04 np0005481680 stupefied_hawking[172742]: }
Oct 12 17:10:04 np0005481680 systemd[1]: libpod-041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6.scope: Deactivated successfully.
Oct 12 17:10:04 np0005481680 podman[172726]: 2025-10-12 21:10:04.435099783 +0000 UTC m=+0.580763583 container died 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:10:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:04 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:10:04 np0005481680 systemd[1]: var-lib-containers-storage-overlay-db56f2c6d43f03ee8daa6d519c6759dc2d24a84b4e8b8a83a235ba7c09e487e9-merged.mount: Deactivated successfully.
Oct 12 17:10:04 np0005481680 podman[172726]: 2025-10-12 21:10:04.688937213 +0000 UTC m=+0.834601003 container remove 041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 17:10:04 np0005481680 systemd[1]: libpod-conmon-041da475ee8a054c55b7dfcbd669e6afaca155934fda309261de247c7521b4e6.scope: Deactivated successfully.
Oct 12 17:10:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:05.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.496098772 +0000 UTC m=+0.040802774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.605742317 +0000 UTC m=+0.150446239 container create 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:10:05 np0005481680 systemd[1]: Started libpod-conmon-182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd.scope.
Oct 12 17:10:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:05 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:05 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:10:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.787783137 +0000 UTC m=+0.332487109 container init 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.799802864 +0000 UTC m=+0.344506816 container start 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:10:05 np0005481680 loving_feistel[173234]: 167 167
Oct 12 17:10:05 np0005481680 systemd[1]: libpod-182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd.scope: Deactivated successfully.
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.829672839 +0000 UTC m=+0.374376851 container attach 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:10:05 np0005481680 podman[173132]: 2025-10-12 21:10:05.830417079 +0000 UTC m=+0.375121031 container died 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:10:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:06 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-082b17020a44ad885168f5a5d9e40aef9d423baa4441bd40477d52673279caec-merged.mount: Deactivated successfully.
Oct 12 17:10:06 np0005481680 podman[173132]: 2025-10-12 21:10:06.239712129 +0000 UTC m=+0.784416091 container remove 182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:10:06 np0005481680 systemd[1]: libpod-conmon-182f7f2f921ed871b63c52938b0c05a9c5e2e76c26880d142e72803b913c49cd.scope: Deactivated successfully.
Oct 12 17:10:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:06 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:10:06 np0005481680 podman[173531]: 2025-10-12 21:10:06.456676168 +0000 UTC m=+0.039751907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:10:06 np0005481680 podman[173531]: 2025-10-12 21:10:06.583487524 +0000 UTC m=+0.166563193 container create f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:10:06 np0005481680 systemd[1]: Started libpod-conmon-f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441.scope.
Oct 12 17:10:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:10:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3144c6c567227d05cca87ab3f946e9120e4026803a455920ebb0f3e804a0920b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3144c6c567227d05cca87ab3f946e9120e4026803a455920ebb0f3e804a0920b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3144c6c567227d05cca87ab3f946e9120e4026803a455920ebb0f3e804a0920b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3144c6c567227d05cca87ab3f946e9120e4026803a455920ebb0f3e804a0920b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:10:06 np0005481680 podman[173531]: 2025-10-12 21:10:06.724377162 +0000 UTC m=+0.307452871 container init f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:10:06 np0005481680 podman[173531]: 2025-10-12 21:10:06.734851517 +0000 UTC m=+0.317927186 container start f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:10:06 np0005481680 podman[173531]: 2025-10-12 21:10:06.740354212 +0000 UTC m=+0.323429951 container attach f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:10:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:07.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:10:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:10:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:07.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:07 np0005481680 lvm[174074]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:10:07 np0005481680 lvm[174074]: VG ceph_vg0 finished
Oct 12 17:10:07 np0005481680 unruffled_cohen[173638]: {}
Oct 12 17:10:07 np0005481680 systemd[1]: libpod-f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441.scope: Deactivated successfully.
Oct 12 17:10:07 np0005481680 systemd[1]: libpod-f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441.scope: Consumed 1.443s CPU time.
Oct 12 17:10:07 np0005481680 podman[173531]: 2025-10-12 21:10:07.607700806 +0000 UTC m=+1.190776455 container died f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:10:07 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3144c6c567227d05cca87ab3f946e9120e4026803a455920ebb0f3e804a0920b-merged.mount: Deactivated successfully.
Oct 12 17:10:07 np0005481680 podman[173531]: 2025-10-12 21:10:07.659771246 +0000 UTC m=+1.242846885 container remove f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:10:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:07 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:07 np0005481680 systemd[1]: libpod-conmon-f57ff89d277d47db40406530d35d528ad5a2a71771271a9385f9df58a0d00441.scope: Deactivated successfully.
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:10:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:08 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:08 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:10:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:09.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:09 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:10 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:10 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:10:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:11 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:10:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:10:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:10:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:10:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:12 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:12 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:13.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:13 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:13.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:14 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:14 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:10:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:10:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:15 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:16 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:16 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:17.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:17.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:10:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:17.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:10:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:17.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:17 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:17.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:18 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:18 np0005481680 podman[178861]: 2025-10-12 21:10:18.157157508 +0000 UTC m=+0.093219183 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:10:18
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', '.mgr', 'vms', 'default.rgw.log', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root']
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:10:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:10:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:10:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:10:18.344 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:10:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:10:18.345 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:10:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:10:18.345 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:18 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:10:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:19.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:19 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:19.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:20 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:20 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:10:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:21 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:22] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:10:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:22] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:10:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:22 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:22 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:10:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:10:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:23 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d400a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:24 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:24 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:25.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:25 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:26 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d400a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:26 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:27.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:10:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:27.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:27.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211027 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:10:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:27 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:27.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:28 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:28 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d400a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:10:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:29.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:10:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:29 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d400a310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:30 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:30 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:10:31 np0005481680 podman[184398]: 2025-10-12 21:10:31.178128545 +0000 UTC m=+0.130781692 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:10:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:31 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:32] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:10:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:32] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:10:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:32 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:32 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:10:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:10:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:10:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:33 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:34 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:34 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:35 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:36 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:36 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:37.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:10:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:37 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:37 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:10:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:38 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:38 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:10:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:39 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:40 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:40 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:10:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:40 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:10:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:40 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:10:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:41 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:10:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:10:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:42 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:42 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:10:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:43.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:43 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:43.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:44 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:10:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:44 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:44 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:10:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:45 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:46 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:10:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:47.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:47.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:10:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:10:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:47 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:47.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:48 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:10:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:10:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:48 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:10:49 np0005481680 podman[189921]: 2025-10-12 21:10:49.121333152 +0000 UTC m=+0.083091358 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 12 17:10:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:49.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211049 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:10:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:49 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:49.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:50 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:50 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:10:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:10:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:10:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:51 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:51.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:10:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:10:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:10:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:52 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:52 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:10:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:53 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:53.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:54 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:54 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:10:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:10:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:55.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:10:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:55 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98ac002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:10:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:55.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:10:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:56 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:10:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:56 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:10:56 np0005481680 kernel: SELinux:  Converting 2773 SID table entries...
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability network_peer_controls=1
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability open_perms=1
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability extended_socket_class=1
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability always_check_network=0
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 12 17:10:56 np0005481680 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 12 17:10:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:57.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:10:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:57.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:10:57.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:10:57 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 12 17:10:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:57 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:57.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98d0004550 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:58 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98b0003870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:10:58 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 17:10:58 np0005481680 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Oct 12 17:10:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:10:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:10:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:10:59 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:10:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:10:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:10:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:10:59.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:00 np0005481680 kernel: ganesha.nfsd[189947]: segfault at 50 ip 00007f9980cb032e sp 00007f993a7fb210 error 4 in libntirpc.so.5.8[7f9980c95000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 12 17:11:00 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:11:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[172182]: 12/10/2025 21:11:00 : epoch 68ec1919 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98a4003c10 fd 38 proxy ignored for local
Oct 12 17:11:00 np0005481680 systemd[1]: Started Process Core Dump (PID 190010/UID 0).
Oct 12 17:11:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:11:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:01.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:01.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:11:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:11:02 np0005481680 podman[190034]: 2025-10-12 21:11:02.226322965 +0000 UTC m=+0.174223028 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 12 17:11:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:03.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:03.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:11:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:05.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:07.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:11:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:07.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:11:07 np0005481680 ceph-mds[96289]: mds.beacon.cephfs.compute-0.nlzxsf missed beacon ack from the monitors
Oct 12 17:11:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:07.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:07.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:11 np0005481680 ceph-mds[96289]: mds.beacon.cephfs.compute-0.nlzxsf missed beacon ack from the monitors
Oct 12 17:11:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:12] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:11:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:12] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:11:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:13.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).mds e11 check_health: resetting beacon timeouts due to mon delay (slow election?) of 13.9175 seconds
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:15 np0005481680 systemd-coredump[190011]: Process 172186 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 59:#012#0  0x00007f9980cb032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:11:15 np0005481680 ceph-mds[96289]: mds.beacon.cephfs.compute-0.nlzxsf missed beacon ack from the monitors
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 12 17:11:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:15 np0005481680 systemd[1]: systemd-coredump@6-190010-0.service: Deactivated successfully.
Oct 12 17:11:15 np0005481680 systemd[1]: systemd-coredump@6-190010-0.service: Consumed 1.296s CPU time.
Oct 12 17:11:15 np0005481680 podman[190165]: 2025-10-12 21:11:15.629214371 +0000 UTC m=+0.056196323 container died 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsid 5adb8c35-1b74-5730-a252-62321f654cd5
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : last_changed 2025-10-12T20:56:25.747024+0000
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : created 2025-10-12T20:54:15.161334+0000
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vonnzo=up:active} 2 up:standby
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.fmjeht(active, since 11m), standbys: compute-2.iamnla, compute-1.orllvh
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-32252db6a2f49e14bd3e50dab22d5a397b374df2707c8d7565c9864841fd8bde-merged.mount: Deactivated successfully.
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:11:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:15.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:11:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:11:16 np0005481680 podman[190165]: 2025-10-12 21:11:16.106784039 +0000 UTC m=+0.533765921 container remove 36c1c1d799d5650e04742319e23e3787a33b573d2eb4f289f457e70afaca6f73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:11:16 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:11:16 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:11:16 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.941s CPU time.
Oct 12 17:11:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:16 np0005481680 podman[190301]: 2025-10-12 21:11:16.701142631 +0000 UTC m=+0.123895163 container create c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:16 np0005481680 podman[190301]: 2025-10-12 21:11:16.621655918 +0000 UTC m=+0.044408430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: mon.compute-1 calling monitor election
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: mon.compute-2 calling monitor election
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: mon.compute-0 calling monitor election
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:11:16 np0005481680 systemd[1]: Started libpod-conmon-c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e.scope.
Oct 12 17:11:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:16 np0005481680 podman[190301]: 2025-10-12 21:11:16.96476967 +0000 UTC m=+0.387522252 container init c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:11:16 np0005481680 podman[190301]: 2025-10-12 21:11:16.977385295 +0000 UTC m=+0.400137817 container start c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:11:16 np0005481680 competent_ganguly[190317]: 167 167
Oct 12 17:11:16 np0005481680 systemd[1]: libpod-c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e.scope: Deactivated successfully.
Oct 12 17:11:17 np0005481680 podman[190301]: 2025-10-12 21:11:17.05937202 +0000 UTC m=+0.482124602 container attach c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 17:11:17 np0005481680 podman[190301]: 2025-10-12 21:11:17.060923469 +0000 UTC m=+0.483675991 container died c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:11:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:17.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:11:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-61214550be561da7f3cf3ac8464714a42501d28214597d43b2c21d69bfa79045-merged.mount: Deactivated successfully.
Oct 12 17:11:17 np0005481680 podman[190301]: 2025-10-12 21:11:17.501736829 +0000 UTC m=+0.924489361 container remove c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ganguly, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:17 np0005481680 systemd[1]: libpod-conmon-c4f44be95ba028108b09a61c2514d143b639173771d54d6e6e72b7dcb9169d6e.scope: Deactivated successfully.
Oct 12 17:11:17 np0005481680 podman[190379]: 2025-10-12 21:11:17.743949114 +0000 UTC m=+0.047136278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:17.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:17 np0005481680 podman[190379]: 2025-10-12 21:11:17.865952668 +0000 UTC m=+0.169139762 container create ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:11:17 np0005481680 systemd[1]: Started libpod-conmon-ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7.scope.
Oct 12 17:11:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:18 np0005481680 podman[190379]: 2025-10-12 21:11:18.107513387 +0000 UTC m=+0.410700501 container init ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:18 np0005481680 podman[190379]: 2025-10-12 21:11:18.1225027 +0000 UTC m=+0.425689794 container start ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:11:18 np0005481680 podman[190379]: 2025-10-12 21:11:18.169144795 +0000 UTC m=+0.472331939 container attach ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:11:18
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.log', 'vms', '.nfs', 'backups', 'cephfs.cephfs.meta']
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:11:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:11:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:11:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:11:18.344 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:11:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:11:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:11:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:11:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
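The three oslo_concurrency DEBUG lines above are the acquire/wait/release trace of a named in-process lock that serializes ProcessMonitor._check_child_processes. A sketch of the pattern that produces exactly this trace, using oslo.concurrency's decorator (illustrative only, not neutron's actual class):

    # @synchronized takes the named lock before calling the body and logs
    # the acquire ("waited ...s") and release ("held ...s") lines above.
    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # runs with the lock held; concurrent callers block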
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
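Each "pg target" figure in the autoscaler block above is reproducible from the numbers on its own line: target = capacity_ratio * bias * 300. The factor 300 is an inference, presumably the root's PG budget of 3 OSDs times the default mon_target_pg_per_osd of 100, which fits the 60 GiB, 3-OSD cluster visible in the pgmap lines; the target is then quantized toward the current pg_num. A check against the logged values:

    # Verifies target = capacity_ratio * bias * 300 for every pool above
    # with a nonzero ratio. The budget of 300 (3 OSDs x 100 PGs/OSD) is an
    # assumption, not stated anywhere in the log.
    import math

    BUDGET = 300
    pools = [  # (pool, capacity_ratio, bias, logged pg target)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        (".nfs",               6.359070782053786e-08, 1.0, 1.907721234616136e-05),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for name, ratio, bias, logged in pools:
        assert math.isclose(ratio * bias * BUDGET, logged, rel_tol=1e-9), name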
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:11:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:18 np0005481680 sleepy_chaum[190395]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:11:18 np0005481680 sleepy_chaum[190395]: --> All data devices are unavailable
Oct 12 17:11:18 np0005481680 systemd[1]: libpod-ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7.scope: Deactivated successfully.
Oct 12 17:11:18 np0005481680 podman[190379]: 2025-10-12 21:11:18.561039625 +0000 UTC m=+0.864226729 container died ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d49c0c3d2fa6b7407892b4b2639d1b6cf1d441ae80f61b7bbfa28d4252533ede-merged.mount: Deactivated successfully.
Oct 12 17:11:19 np0005481680 podman[190379]: 2025-10-12 21:11:19.10303935 +0000 UTC m=+1.406226444 container remove ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:11:19 np0005481680 systemd[1]: libpod-conmon-ed7588373f2637ebc71978ff06357c20aa76803d82b956931764cde5c1f99fd7.scope: Deactivated successfully.
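The create/init/start/attach/died/remove sequence that just completed for sleepy_chaum (ed7588...) is the normal podman lifecycle of a one-shot helper container; cephadm runs its device probes this way, which is why randomly named ceph containers keep appearing and dying throughout this log. A one-shot invocation of the same image would look like the sketch below; the actual command line inside the container is not recorded in these events, so the subcommand is only a plausible stand-in.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the died-then-remove pair logged above; the
    # ceph-volume subcommand is hypothetical.
    subprocess.run(["podman", "run", "--rm", IMAGE, "ceph-volume", "inventory"],
                   check=False)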
Oct 12 17:11:19 np0005481680 podman[190448]: 2025-10-12 21:11:19.426517722 +0000 UTC m=+0.111789411 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 12 17:11:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:19.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
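The beast lines above repeat a fixed access-log layout: request pointer, client address, user, timestamp, request line, status, byte count, and latency. A regex sketch that parses the message part, with the pattern inferred solely from these samples rather than from radosgw documentation:

    import re

    # Field layout inferred from the beast lines in this log.
    BEAST = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
              '[12/Oct/2025:21:11:19.535 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.001000024s')
    m = BEAST.match(sample)
    print(m["client"], m["request"], m["status"], float(m["latency"]))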
Oct 12 17:11:20 np0005481680 podman[190537]: 2025-10-12 21:11:20.029189871 +0000 UTC m=+0.107581796 container create 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:11:20 np0005481680 podman[190537]: 2025-10-12 21:11:19.954371594 +0000 UTC m=+0.032763559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211120 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
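The haproxy warning above is a Layer4 check failure: a TCP connect to the nfs.cephfs.2 backend was refused, so the server was pulled from rotation (systemd restarts that NFS unit further down, at restart counter 7). A Layer4 probe of that style is just a timed connect attempt; host and port in the sketch are placeholders, since the log names only the backend.

    import socket

    def layer4_up(host: str, port: int, timeout: float = 2.0) -> bool:
        # Mirrors haproxy's Layer4 check: success means the TCP handshake
        # completed; ConnectionRefusedError is the failure logged above.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(layer4_up("compute-0.ctlplane.example.com", 2049))  # hypothetical target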
Oct 12 17:11:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:20 np0005481680 systemd[1]: Started libpod-conmon-011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797.scope.
Oct 12 17:11:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:21 np0005481680 podman[190537]: 2025-10-12 21:11:21.846655525 +0000 UTC m=+1.925047540 container init 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:21 np0005481680 podman[190537]: 2025-10-12 21:11:21.861139597 +0000 UTC m=+1.939531562 container start 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:11:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:21 np0005481680 flamboyant_shockley[190565]: 167 167
Oct 12 17:11:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:21 np0005481680 systemd[1]: libpod-011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797.scope: Deactivated successfully.
Oct 12 17:11:21 np0005481680 podman[190537]: 2025-10-12 21:11:21.951369898 +0000 UTC m=+2.029761943 container attach 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:11:21 np0005481680 podman[190537]: 2025-10-12 21:11:21.952482656 +0000 UTC m=+2.030874611 container died 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:11:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:22] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Oct 12 17:11:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:22] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Oct 12 17:11:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ce9881e88dcab702e173984c50c896138e331a23cd9ecc69ff8976633d820c9e-merged.mount: Deactivated successfully.
Oct 12 17:11:22 np0005481680 podman[190537]: 2025-10-12 21:11:22.277248421 +0000 UTC m=+2.355640386 container remove 011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_shockley, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:11:22 np0005481680 systemd[1]: libpod-conmon-011c44340fcb873ea740bb218e623d8015a133899a77a502d051a77328963797.scope: Deactivated successfully.
Oct 12 17:11:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:22 np0005481680 podman[190644]: 2025-10-12 21:11:22.537939136 +0000 UTC m=+0.063213638 container create 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:11:22 np0005481680 systemd[1]: Started libpod-conmon-6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08.scope.
Oct 12 17:11:22 np0005481680 podman[190644]: 2025-10-12 21:11:22.510253725 +0000 UTC m=+0.035528247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2239d90bd391b9ba3b5f4666cda832551d18579a7b47c040ba9134e9556bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2239d90bd391b9ba3b5f4666cda832551d18579a7b47c040ba9134e9556bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2239d90bd391b9ba3b5f4666cda832551d18579a7b47c040ba9134e9556bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2239d90bd391b9ba3b5f4666cda832551d18579a7b47c040ba9134e9556bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:22 np0005481680 podman[190644]: 2025-10-12 21:11:22.63666648 +0000 UTC m=+0.161941032 container init 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:11:22 np0005481680 podman[190644]: 2025-10-12 21:11:22.658358891 +0000 UTC m=+0.183633393 container start 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:11:22 np0005481680 podman[190644]: 2025-10-12 21:11:22.66913341 +0000 UTC m=+0.194407962 container attach 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:11:22 np0005481680 crazy_engelbart[190671]: {
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:    "0": [
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:        {
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "devices": [
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "/dev/loop3"
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            ],
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "lv_name": "ceph_lv0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "lv_size": "21470642176",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "name": "ceph_lv0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "tags": {
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.cluster_name": "ceph",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.crush_device_class": "",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.encrypted": "0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.osd_id": "0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.type": "block",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.vdo": "0",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:                "ceph.with_tpm": "0"
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            },
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "type": "block",
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:            "vg_name": "ceph_vg0"
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:        }
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]:    ]
Oct 12 17:11:23 np0005481680 crazy_engelbart[190671]: }
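The JSON that crazy_engelbart printed above maps OSD ids to their logical volumes; the shape matches ceph-volume lvm list --format json output, though that command name is an inference, since the log records only the output. A sketch that flattens it into an OSD-to-device map:

    import json

    def osd_devices(report: str) -> dict:
        out = {}
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                out[int(osd_id)] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
        return out

    # For the block above this yields:
    # {0: {'lv_path': '/dev/ceph_vg0/ceph_lv0',
    #      'devices': ['/dev/loop3'],
    #      'osd_fsid': '47abdfbc-9d1c-416d-8d2d-2f925f341a02'}}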
Oct 12 17:11:23 np0005481680 systemd[1]: libpod-6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08.scope: Deactivated successfully.
Oct 12 17:11:23 np0005481680 podman[190644]: 2025-10-12 21:11:23.062423705 +0000 UTC m=+0.587698217 container died 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:11:23 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ff2239d90bd391b9ba3b5f4666cda832551d18579a7b47c040ba9134e9556bb2-merged.mount: Deactivated successfully.
Oct 12 17:11:23 np0005481680 podman[190644]: 2025-10-12 21:11:23.141168519 +0000 UTC m=+0.666443021 container remove 6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:23 np0005481680 systemd[1]: libpod-conmon-6f98c17308c123fc1562e608563fa2cc91240cb226aca8bf8640919f1eea2b08.scope: Deactivated successfully.
Oct 12 17:11:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:23.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.001852968 +0000 UTC m=+0.046743348 container create 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:11:24 np0005481680 systemd[1]: Started libpod-conmon-9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469.scope.
Oct 12 17:11:24 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:23.981471529 +0000 UTC m=+0.026361899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.100669763 +0000 UTC m=+0.145560213 container init 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.109304989 +0000 UTC m=+0.154195379 container start 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:11:24 np0005481680 condescending_ganguly[190905]: 167 167
Oct 12 17:11:24 np0005481680 systemd[1]: libpod-9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469.scope: Deactivated successfully.
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.121022492 +0000 UTC m=+0.165912882 container attach 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.121515774 +0000 UTC m=+0.166406164 container died 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:11:24 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8f584b65f6c4315a282378a89e4f98f5f17c527ed1a65c6de2ab04ea8b33b4fc-merged.mount: Deactivated successfully.
Oct 12 17:11:24 np0005481680 podman[190889]: 2025-10-12 21:11:24.190001243 +0000 UTC m=+0.234891633 container remove 9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:11:24 np0005481680 systemd[1]: libpod-conmon-9028598de380f2bfe7cd0730c51b8e50c32b2a85829a4e6ea0b87fa3e5f51469.scope: Deactivated successfully.
Oct 12 17:11:24 np0005481680 podman[190930]: 2025-10-12 21:11:24.481577119 +0000 UTC m=+0.071351072 container create 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:11:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:24 np0005481680 systemd[1]: Started libpod-conmon-012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6.scope.
Oct 12 17:11:24 np0005481680 podman[190930]: 2025-10-12 21:11:24.451409146 +0000 UTC m=+0.041183169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:24 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:11:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05170d392e2c945ea872cc19c9f71a33bf8a0a69bdf50e4b0a12f6a6eed5b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05170d392e2c945ea872cc19c9f71a33bf8a0a69bdf50e4b0a12f6a6eed5b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05170d392e2c945ea872cc19c9f71a33bf8a0a69bdf50e4b0a12f6a6eed5b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05170d392e2c945ea872cc19c9f71a33bf8a0a69bdf50e4b0a12f6a6eed5b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:24 np0005481680 podman[190930]: 2025-10-12 21:11:24.605031419 +0000 UTC m=+0.194805442 container init 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:11:24 np0005481680 podman[190930]: 2025-10-12 21:11:24.616959767 +0000 UTC m=+0.206733750 container start 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:11:24 np0005481680 podman[190930]: 2025-10-12 21:11:24.621731056 +0000 UTC m=+0.211505039 container attach 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:11:25 np0005481680 lvm[191021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:11:25 np0005481680 lvm[191021]: VG ceph_vg0 finished
Oct 12 17:11:25 np0005481680 gracious_chebyshev[190946]: {}
Oct 12 17:11:25 np0005481680 systemd[1]: libpod-012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6.scope: Deactivated successfully.
Oct 12 17:11:25 np0005481680 podman[190930]: 2025-10-12 21:11:25.489753997 +0000 UTC m=+1.079527950 container died 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:11:25 np0005481680 systemd[1]: libpod-012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6.scope: Consumed 1.483s CPU time.
Oct 12 17:11:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f05170d392e2c945ea872cc19c9f71a33bf8a0a69bdf50e4b0a12f6a6eed5b87-merged.mount: Deactivated successfully.
Oct 12 17:11:25 np0005481680 podman[190930]: 2025-10-12 21:11:25.538821742 +0000 UTC m=+1.128595685 container remove 012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chebyshev, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:11:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:25 np0005481680 systemd[1]: libpod-conmon-012dab885f51937c4aca756ee96e1b93ad701b023fa3d70e7b3b1a2391d35ce6.scope: Deactivated successfully.
Oct 12 17:11:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:11:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:11:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:11:26 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 7.
Oct 12 17:11:26 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:11:26 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.941s CPU time.
Oct 12 17:11:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:26 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
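The nfs.cephfs.2.0 unit above has now been restarted seven times; systemd exposes that counter as the unit's NRestarts property, which can be read back as in this sketch:

    import subprocess

    UNIT = ("ceph-5adb8c35-1b74-5730-a252-62321f654cd5"
            "@nfs.cephfs.2.0.compute-0.hypubd.service")
    out = subprocess.run(
        ["systemctl", "show", UNIT, "--property=NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()          # e.g. "NRestarts=7"
    print(int(out.split("=", 1)[1]))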
Oct 12 17:11:26 np0005481680 podman[191410]: 2025-10-12 21:11:26.869715894 +0000 UTC m=+0.053967358 container create 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:11:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362707ab0a42242865ed516ea3459a443863351074319270fcab999edb862a7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362707ab0a42242865ed516ea3459a443863351074319270fcab999edb862a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362707ab0a42242865ed516ea3459a443863351074319270fcab999edb862a7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5362707ab0a42242865ed516ea3459a443863351074319270fcab999edb862a7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:11:26 np0005481680 podman[191410]: 2025-10-12 21:11:26.939838514 +0000 UTC m=+0.124089998 container init 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:11:26 np0005481680 podman[191410]: 2025-10-12 21:11:26.846475654 +0000 UTC m=+0.030727118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:11:26 np0005481680 podman[191410]: 2025-10-12 21:11:26.947496805 +0000 UTC m=+0.131748259 container start 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:11:26 np0005481680 bash[191410]: 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678
Oct 12 17:11:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:26 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:11:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:26 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:11:26 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:27.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:27.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:11:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:27.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:27 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:27 np0005481680 systemd[1]: Stopping OpenSSH server daemon...
Oct 12 17:11:27 np0005481680 systemd[1]: sshd.service: Deactivated successfully.
Oct 12 17:11:27 np0005481680 systemd[1]: Stopped OpenSSH server daemon.
Oct 12 17:11:27 np0005481680 systemd[1]: sshd.service: Consumed 2.786s CPU time, no IO.
Oct 12 17:11:27 np0005481680 systemd[1]: Stopped target sshd-keygen.target.
Oct 12 17:11:27 np0005481680 systemd[1]: Stopping sshd-keygen.target...
Oct 12 17:11:27 np0005481680 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 17:11:27 np0005481680 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 17:11:27 np0005481680 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 12 17:11:27 np0005481680 systemd[1]: Reached target sshd-keygen.target.
Oct 12 17:11:27 np0005481680 systemd[1]: Starting OpenSSH server daemon...
Oct 12 17:11:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:27 np0005481680 systemd[1]: Started OpenSSH server daemon.
Oct 12 17:11:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:28 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54740016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:11:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:28 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5464000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:29.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:29 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:29.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211130 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:11:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:30 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 5 op/s
Oct 12 17:11:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:30 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54740016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:30 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 17:11:30 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 17:11:30 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:30 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:30 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:31 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 17:11:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:31 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:32] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Oct 12 17:11:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:32] "GET /metrics HTTP/1.1" 200 48185 "" "Prometheus/2.51.0"
Oct 12 17:11:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:32 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:11:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:32 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:32 np0005481680 systemd[1]: Starting PackageKit Daemon...
Oct 12 17:11:32 np0005481680 systemd[1]: Started PackageKit Daemon.
Oct 12 17:11:32 np0005481680 podman[193515]: 2025-10-12 21:11:32.981583153 +0000 UTC m=+0.147543052 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 12 17:11:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:11:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:11:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:33.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:33 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54740016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:34 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:11:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:34 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:34 np0005481680 python3.9[194759]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:11:34 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:34 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:34 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:35 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:35.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:36 np0005481680 python3.9[195952]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:11:36 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:36 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:36 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:36 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:11:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:36 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54740016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:11:37 np0005481680 python3.9[197282]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:11:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:37 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:37 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:37 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:37 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:38 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:11:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:38 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:38 np0005481680 python3.9[198471]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:11:38 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:39 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:39 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:39 np0005481680 auditd[701]: Audit daemon rotating log files
Oct 12 17:11:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:39.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:39 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54740016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:39.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:40 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:40 np0005481680 python3.9[199823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1023 B/s wr, 5 op/s
Oct 12 17:11:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:40 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:41 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:41.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:41 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:41 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:41 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:41.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:11:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:11:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:42 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:42 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 17:11:42 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 17:11:42 np0005481680 systemd[1]: man-db-cache-update.service: Consumed 14.338s CPU time.
Oct 12 17:11:42 np0005481680 systemd[1]: run-r3d59966827aa4a779d8229c6533d019e.service: Deactivated successfully.
Oct 12 17:11:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:42 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:42 np0005481680 python3.9[201545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:42 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:42 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:42 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:43.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:43 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:43.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:44 np0005481680 python3.9[201737]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:44 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:44 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:45 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:45 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:45 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:45.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:45 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0028a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:45.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:46 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:46 np0005481680 python3.9[201930]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:46 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:47.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:11:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:47.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:47 np0005481680 python3.9[202086]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:47 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:47 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:47.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:48 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:11:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:11:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:48 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:49 np0005481680 python3.9[202278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 12 17:11:49 np0005481680 systemd[1]: Reloading.
Oct 12 17:11:49 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:11:49 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:11:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:49.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:49 np0005481680 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 12 17:11:49 np0005481680 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 12 17:11:49 np0005481680 podman[202320]: 2025-10-12 21:11:49.682831257 +0000 UTC m=+0.082197631 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.734421) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509734463, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4235, "num_deletes": 502, "total_data_size": 8584162, "memory_usage": 8722856, "flush_reason": "Manual Compaction"}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 12 17:11:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:49 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54880091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509783563, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8328065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13085, "largest_seqno": 17319, "table_properties": {"data_size": 8310212, "index_size": 12043, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4741, "raw_key_size": 37319, "raw_average_key_size": 19, "raw_value_size": 8273245, "raw_average_value_size": 4393, "num_data_blocks": 526, "num_entries": 1883, "num_filter_entries": 1883, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303054, "oldest_key_time": 1760303054, "file_creation_time": 1760303509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 49209 microseconds, and 25080 cpu microseconds.
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.783627) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8328065 bytes OK
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.783654) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.785660) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.785683) EVENT_LOG_v1 {"time_micros": 1760303509785676, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.785706) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8567191, prev total WAL file size 8567191, number of live WAL files 2.
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.789127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8132KB)], [32(11MB)]
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509789195, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20760580, "oldest_snapshot_seqno": -1}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5055 keys, 15229565 bytes, temperature: kUnknown
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509888998, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15229565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15191185, "index_size": 24638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126510, "raw_average_key_size": 25, "raw_value_size": 15095074, "raw_average_value_size": 2986, "num_data_blocks": 1038, "num_entries": 5055, "num_filter_entries": 5055, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.889326) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15229565 bytes
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.890912) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.8 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.9 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.3) write-amplify(1.8) OK, records in: 6080, records dropped: 1025 output_compression: NoCompression
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.890943) EVENT_LOG_v1 {"time_micros": 1760303509890928, "job": 14, "event": "compaction_finished", "compaction_time_micros": 99922, "compaction_time_cpu_micros": 49230, "output_level": 6, "num_output_files": 1, "total_output_size": 15229565, "num_input_records": 6080, "num_output_records": 5055, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509893718, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303509898210, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.788982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.898298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.898307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.898310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.898313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:11:49.898316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:11:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:49.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:50 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:11:50 np0005481680 python3.9[202492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:50 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:51 np0005481680 python3.9[202648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:51.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:51 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:51.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:11:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:11:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:11:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:52 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54880091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:52 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:52 np0005481680 python3.9[202804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:53.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:53 np0005481680 python3.9[202960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:53 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:53.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:54 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:54 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:54 np0005481680 python3.9[203116]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:11:55 np0005481680 python3.9[203272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:55 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:11:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:55.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:11:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:56 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:56 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:56 np0005481680 python3.9[203428]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:11:57.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:11:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:11:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:57.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:11:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:57 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:57 np0005481680 python3.9[203607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:58 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:11:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:58 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:58 np0005481680 python3.9[203765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:11:59.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:11:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:11:59 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:11:59 np0005481680 python3.9[203922]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:11:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:11:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:11:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:11:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:00 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:12:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:00 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5450000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:00 np0005481680 python3.9[204078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:12:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:01 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:01.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:02 np0005481680 python3.9[204235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:12:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:02 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:02 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:03 np0005481680 python3.9[204390]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:12:03 np0005481680 podman[204391]: 2025-10-12 21:12:03.187147788 +0000 UTC m=+0.145007582 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:12:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:12:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:12:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:03.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:03 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:03.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:04 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:04 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:05 np0005481680 python3.9[204573]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 12 17:12:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:05 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:06 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:06 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:07.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:12:07 np0005481680 python3.9[204730]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:07 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:07 np0005481680 python3.9[204884]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:07.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:08 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:08 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:08 np0005481680 python3.9[205036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:09 np0005481680 python3.9[205189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:09 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:09.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:10 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54640036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:10 np0005481680 python3.9[205342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:12:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:10 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:11 np0005481680 python3.9[205494]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:12:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:11.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:11 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5450002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:11.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:12] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:12:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:12] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:12:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:12 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f545c0046b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:12 np0005481680 python3.9[205649]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:12 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5474000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:12 np0005481680 python3.9[205774]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303531.4471543-1622-145908567355806/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:13.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:13 np0005481680 python3.9[205927]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:13 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5488009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:13.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[191479]: 12/10/2025 21:12:14 : epoch 68ec197e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5450002b10 fd 38 proxy ignored for local
Oct 12 17:12:14 np0005481680 kernel: ganesha.nfsd[203923]: segfault at 50 ip 00007f5531d1432e sp 00007f54f3ffe210 error 4 in libntirpc.so.5.8[7f5531cf9000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 12 17:12:14 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:12:14 np0005481680 systemd[1]: Started Process Core Dump (PID 206052/UID 0).
Oct 12 17:12:14 np0005481680 python3.9[206054]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303533.1742742-1622-208671386062291/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:14 np0005481680 python3.9[206207]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:15 np0005481680 systemd-coredump[206055]: Process 191504 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f5531d1432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 12 17:12:15 np0005481680 systemd[1]: systemd-coredump@7-206052-0.service: Deactivated successfully.
Oct 12 17:12:15 np0005481680 systemd[1]: systemd-coredump@7-206052-0.service: Consumed 1.170s CPU time.
Oct 12 17:12:15 np0005481680 podman[206332]: 2025-10-12 21:12:15.507167821 +0000 UTC m=+0.048679535 container died 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:12:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5362707ab0a42242865ed516ea3459a443863351074319270fcab999edb862a7-merged.mount: Deactivated successfully.
Oct 12 17:12:15 np0005481680 podman[206332]: 2025-10-12 21:12:15.563105381 +0000 UTC m=+0.104617055 container remove 09d737b8e71acd932287cb11d74241b678f3eae0b512209bd36055ba259eb678 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:12:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:15.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:15 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:12:15 np0005481680 python3.9[206345]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303534.4994395-1622-63112445702916/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:15 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:12:15 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.704s CPU time.
Oct 12 17:12:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:16 np0005481680 python3.9[206533]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:12:17 np0005481680 python3.9[206658]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303535.9008555-1622-223792563624520/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:17.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:18 np0005481680 python3.9[206837]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:12:18
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr', '.rgw.root', 'vms', 'images', 'backups']
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:12:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:12:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:12:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:12:18.345 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:12:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:12:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:12:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:12:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:12:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:12:18 np0005481680 python3.9[206962]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303537.427539-1622-112309810306000/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:19.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:19 np0005481680 python3.9[207115]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:19.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:20 np0005481680 podman[207189]: 2025-10-12 21:12:20.11812777 +0000 UTC m=+0.074713888 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 12 17:12:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211220 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:12:20 np0005481680 python3.9[207260]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303539.0346558-1622-22572333730099/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:12:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:21 np0005481680 python3.9[207412]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:21.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:22 np0005481680 python3.9[207537]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303540.6738646-1622-140713152519796/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:12:23 np0005481680 python3.9[207689]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:23 np0005481680 python3.9[207816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760303542.414834-1622-170174807341839/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:23.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:12:24 np0005481680 python3.9[207968]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 12 17:12:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:25 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 8.
Oct 12 17:12:25 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:12:25 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.704s CPU time.
Oct 12 17:12:25 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:12:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:26 np0005481680 python3.9[208124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:26 np0005481680 podman[208213]: 2025-10-12 21:12:26.229989944 +0000 UTC m=+0.068281514 container create 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:12:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13735665cc5fe667d104b1ca700fbdf9423c1718e2b99281aaa46f564969a26d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13735665cc5fe667d104b1ca700fbdf9423c1718e2b99281aaa46f564969a26d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13735665cc5fe667d104b1ca700fbdf9423c1718e2b99281aaa46f564969a26d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:26 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13735665cc5fe667d104b1ca700fbdf9423c1718e2b99281aaa46f564969a26d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:26 np0005481680 podman[208213]: 2025-10-12 21:12:26.199949121 +0000 UTC m=+0.038240701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:26 np0005481680 podman[208213]: 2025-10-12 21:12:26.325428017 +0000 UTC m=+0.163719637 container init 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 17:12:26 np0005481680 podman[208213]: 2025-10-12 21:12:26.334763274 +0000 UTC m=+0.173054854 container start 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:26 np0005481680 bash[208213]: 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a
Oct 12 17:12:26 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:12:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:12:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:12:26 np0005481680 podman[208499]: 2025-10-12 21:12:26.910048845 +0000 UTC m=+0.101756763 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:12:26 np0005481680 python3.9[208498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:27 np0005481680 podman[208499]: 2025-10-12 21:12:27.055361994 +0000 UTC m=+0.247069902 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:12:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:27.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:12:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:27 np0005481680 podman[208774]: 2025-10-12 21:12:27.789230421 +0000 UTC m=+0.084454824 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:27 np0005481680 podman[208774]: 2025-10-12 21:12:27.806987102 +0000 UTC m=+0.102211505 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:27 np0005481680 python3.9[208760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:28 np0005481680 podman[208963]: 2025-10-12 21:12:28.324352085 +0000 UTC m=+0.117133255 container exec 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:12:28 np0005481680 podman[208963]: 2025-10-12 21:12:28.365321244 +0000 UTC m=+0.158102414 container exec_died 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 17:12:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:12:28 np0005481680 python3.9[209042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:28 np0005481680 podman[209083]: 2025-10-12 21:12:28.69136134 +0000 UTC m=+0.069013293 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:12:28 np0005481680 podman[209083]: 2025-10-12 21:12:28.701566499 +0000 UTC m=+0.079218472 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:12:28 np0005481680 podman[209223]: 2025-10-12 21:12:28.985575828 +0000 UTC m=+0.062630641 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived)
Oct 12 17:12:29 np0005481680 podman[209223]: 2025-10-12 21:12:29.005470043 +0000 UTC m=+0.082524856 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 12 17:12:29 np0005481680 podman[209367]: 2025-10-12 21:12:29.355938908 +0000 UTC m=+0.090020175 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:29 np0005481680 podman[209367]: 2025-10-12 21:12:29.403961788 +0000 UTC m=+0.138042995 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:29 np0005481680 python3.9[209365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:29 np0005481680 podman[209473]: 2025-10-12 21:12:29.71687664 +0000 UTC m=+0.086150377 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:12:29 np0005481680 podman[209473]: 2025-10-12 21:12:29.927382313 +0000 UTC m=+0.296656000 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:12:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:30 np0005481680 python3.9[209658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:30 np0005481680 podman[209726]: 2025-10-12 21:12:30.544153339 +0000 UTC m=+0.099269931 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:12:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:30 np0005481680 podman[209726]: 2025-10-12 21:12:30.603951827 +0000 UTC m=+0.159068389 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:12:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:12:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:12:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 python3.9[209949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:12:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:31.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:12:31 np0005481680 python3.9[210183]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.316008942 +0000 UTC m=+0.074815389 container create 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:32 np0005481680 systemd[1]: Started libpod-conmon-568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80.scope.
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.286177165 +0000 UTC m=+0.044983682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:32 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.42539705 +0000 UTC m=+0.184203547 container init 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.435193668 +0000 UTC m=+0.194000125 container start 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.439089317 +0000 UTC m=+0.197895814 container attach 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:12:32 np0005481680 bold_curran[210331]: 167 167
Oct 12 17:12:32 np0005481680 systemd[1]: libpod-568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80.scope: Deactivated successfully.
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.444929386 +0000 UTC m=+0.203735843 container died 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-de40557e90eacbb6007458758a412990d1b2f7ab7095f1b01fde749edbdab00a-merged.mount: Deactivated successfully.
Oct 12 17:12:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:12:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:12:32 np0005481680 podman[210279]: 2025-10-12 21:12:32.502367314 +0000 UTC m=+0.261173771 container remove 568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:12:32 np0005481680 systemd[1]: libpod-conmon-568b8dc79e6c1d97ef9f7d136231a18852ec21c9f7fb3435db25ea4d61c07e80.scope: Deactivated successfully.
Oct 12 17:12:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:12:32 np0005481680 podman[210419]: 2025-10-12 21:12:32.789770498 +0000 UTC m=+0.075658411 container create 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct 12 17:12:32 np0005481680 python3.9[210414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:32 np0005481680 podman[210419]: 2025-10-12 21:12:32.761933152 +0000 UTC m=+0.047821085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:32 np0005481680 systemd[1]: Started libpod-conmon-4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10.scope.
Oct 12 17:12:32 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:32 np0005481680 podman[210419]: 2025-10-12 21:12:32.924027116 +0000 UTC m=+0.209915069 container init 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:12:32 np0005481680 podman[210419]: 2025-10-12 21:12:32.939306984 +0000 UTC m=+0.225194897 container start 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:12:32 np0005481680 podman[210419]: 2025-10-12 21:12:32.944594078 +0000 UTC m=+0.230481981 container attach 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:12:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:12:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
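[annotation] The dispatch above is the mgr polling the OSD blocklist; the same mon_command can be issued by hand from any node with a client keyring, which these containers have. A hedged sketch via the ceph CLI (the command string mirrors the logged prefix; the empty-output guard is ours):

```python
import json
import subprocess

# Same command the mgr dispatched: {"prefix": "osd blocklist ls", "format": "json"}
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(json.loads(out or "[]"))  # expect [] on a healthy, freshly deployed cluster
```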
Oct 12 17:12:33 np0005481680 relaxed_johnson[210435]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:12:33 np0005481680 relaxed_johnson[210435]: --> All data devices are unavailable
Oct 12 17:12:33 np0005481680 systemd[1]: libpod-4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10.scope: Deactivated successfully.
Oct 12 17:12:33 np0005481680 podman[210419]: 2025-10-12 21:12:33.392572228 +0000 UTC m=+0.678460141 container died 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:33 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d802954f69adeed19e850e3d66aeadf60b8d4680d86d6ca9bc780d4b393b1791-merged.mount: Deactivated successfully.
Oct 12 17:12:33 np0005481680 podman[210419]: 2025-10-12 21:12:33.455172548 +0000 UTC m=+0.741060461 container remove 4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 12 17:12:33 np0005481680 systemd[1]: libpod-conmon-4cb3e9c4e33ea686f2076b555a6cb3aa856e6cd3b95f221c8183e7b5620a5a10.scope: Deactivated successfully.
Oct 12 17:12:33 np0005481680 podman[210574]: 2025-10-12 21:12:33.570590888 +0000 UTC m=+0.168156850 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
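[annotation] The health_status=healthy event above comes from podman's native healthcheck, wired up by edpm_ansible through the 'healthcheck' key in the logged config_data ('test': '/openstack/healthcheck', with the scripts bind-mounted read-only at /openstack). The same check can be fired on demand outside the timer; a sketch using the podman CLI, container name taken from the event:

```python
import subprocess

# Manually trigger the check podman otherwise runs on its interval
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
# Exit status 0 == healthy, matching health_status=healthy in the journal event
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout or result.stderr}")
```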
Oct 12 17:12:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:33 np0005481680 python3.9[210632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
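[annotation] The recurring "starting new request / req done / beast:" triplets are external HEAD / health probes against radosgw from the .100 and .102 controllers, roughly every two seconds. The beast access line has a fixed field layout and parses cleanly when auditing these in bulk; a minimal sketch, field layout inferred from the lines above:

```python
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
        '[12/Oct/2025:21:12:33.610 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000024s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), m.group("latency"))
# -> 192.168.122.100 200 0.001000024
```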
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.215605969 +0000 UTC m=+0.061806729 container create 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.193242142 +0000 UTC m=+0.039442982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:34 np0005481680 systemd[1]: Started libpod-conmon-27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913.scope.
Oct 12 17:12:34 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.348854361 +0000 UTC m=+0.195055181 container init 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.361134233 +0000 UTC m=+0.207335023 container start 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.365310409 +0000 UTC m=+0.211511239 container attach 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:12:34 np0005481680 affectionate_varahamihira[210901]: 167 167
Oct 12 17:12:34 np0005481680 systemd[1]: libpod-27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913.scope: Deactivated successfully.
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.369151147 +0000 UTC m=+0.215351937 container died 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:12:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-895d8c6803a4dc8a9861b66ef27cab18534782f8d3dfa1737d0f15331cad76cd-merged.mount: Deactivated successfully.
Oct 12 17:12:34 np0005481680 podman[210846]: 2025-10-12 21:12:34.421836614 +0000 UTC m=+0.268037394 container remove 27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_varahamihira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:12:34 np0005481680 systemd[1]: libpod-conmon-27f3188d871547d992c7cc59dd0484503836f50e3e3b4d98362714c3e2b44913.scope: Deactivated successfully.
Oct 12 17:12:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:12:34 np0005481680 python3.9[210905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:34 np0005481680 podman[210926]: 2025-10-12 21:12:34.651229786 +0000 UTC m=+0.078262787 container create 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:12:34 np0005481680 systemd[1]: Started libpod-conmon-4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb.scope.
Oct 12 17:12:34 np0005481680 podman[210926]: 2025-10-12 21:12:34.6206577 +0000 UTC m=+0.047690751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:34 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6c772dc8443a685d6ed373197a702ea8e892f577d27ab161e8364b05b7ebc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6c772dc8443a685d6ed373197a702ea8e892f577d27ab161e8364b05b7ebc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6c772dc8443a685d6ed373197a702ea8e892f577d27ab161e8364b05b7ebc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6c772dc8443a685d6ed373197a702ea8e892f577d27ab161e8364b05b7ebc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:34 np0005481680 podman[210926]: 2025-10-12 21:12:34.763577699 +0000 UTC m=+0.190610760 container init 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:12:34 np0005481680 podman[210926]: 2025-10-12 21:12:34.777435 +0000 UTC m=+0.204468001 container start 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:12:34 np0005481680 podman[210926]: 2025-10-12 21:12:34.787114145 +0000 UTC m=+0.214147206 container attach 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]: {
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:    "0": [
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:        {
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "devices": [
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "/dev/loop3"
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            ],
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "lv_name": "ceph_lv0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "lv_size": "21470642176",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "name": "ceph_lv0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "tags": {
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.cluster_name": "ceph",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.crush_device_class": "",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.encrypted": "0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.osd_id": "0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.type": "block",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.vdo": "0",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:                "ceph.with_tpm": "0"
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            },
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "type": "block",
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:            "vg_name": "ceph_vg0"
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:        }
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]:    ]
Oct 12 17:12:35 np0005481680 focused_bhaskara[210966]: }
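[annotation] The focused_bhaskara container is cephadm running ceph-volume's LVM listing inside the Ceph image: the JSON above maps OSD id 0 to LV ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster and OSD identity carried in the ceph.* LV tags. Once the journal prefix is stripped, the payload is plain JSON; a small sketch of pulling out the fields an operator usually wants (report abbreviated from the lines above):

```python
import json

# ceph-volume lvm list --format json payload, abbreviated from the log above
report = json.loads("""
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "21470642176",
      "tags": {
        "ceph.osd_id": "0",
        "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
        "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
        "ceph.type": "block"
      }
    }
  ]
}
""")

for osd_id, lvs in report.items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"({int(lv['lv_size']) / 2**30:.1f} GiB) on {','.join(lv['devices'])}")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB) on /dev/loop3
```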
Oct 12 17:12:35 np0005481680 systemd[1]: libpod-4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb.scope: Deactivated successfully.
Oct 12 17:12:35 np0005481680 podman[210926]: 2025-10-12 21:12:35.1583656 +0000 UTC m=+0.585398601 container died 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:12:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-da6c772dc8443a685d6ed373197a702ea8e892f577d27ab161e8364b05b7ebc0-merged.mount: Deactivated successfully.
Oct 12 17:12:35 np0005481680 podman[210926]: 2025-10-12 21:12:35.224040536 +0000 UTC m=+0.651073547 container remove 4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:12:35 np0005481680 systemd[1]: libpod-conmon-4ee2ce6dc4fffccd552787dd1e2684210311af2b7723c13ca461afb1ba20adcb.scope: Deactivated successfully.
Oct 12 17:12:35 np0005481680 python3.9[211115]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.612937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555612965, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 625, "num_deletes": 250, "total_data_size": 889059, "memory_usage": 901616, "flush_reason": "Manual Compaction"}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 12 17:12:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555618019, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 604598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17320, "largest_seqno": 17944, "table_properties": {"data_size": 601649, "index_size": 921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7622, "raw_average_key_size": 20, "raw_value_size": 595520, "raw_average_value_size": 1563, "num_data_blocks": 40, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303510, "oldest_key_time": 1760303510, "file_creation_time": 1760303555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5107 microseconds, and 1960 cpu microseconds.
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.618046) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 604598 bytes OK
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.618070) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.620027) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.620040) EVENT_LOG_v1 {"time_micros": 1760303555620036, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.620052) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 885766, prev total WAL file size 885766, number of live WAL files 2.
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.620574) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(590KB)], [35(14MB)]
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555620593, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 15834163, "oldest_snapshot_seqno": -1}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4940 keys, 12030274 bytes, temperature: kUnknown
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555685639, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12030274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11996683, "index_size": 20169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124501, "raw_average_key_size": 25, "raw_value_size": 11906552, "raw_average_value_size": 2410, "num_data_blocks": 842, "num_entries": 4940, "num_filter_entries": 4940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.685817) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12030274 bytes
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.687134) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.2 rd, 184.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(46.1) write-amplify(19.9) OK, records in: 5436, records dropped: 496 output_compression: NoCompression
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.687151) EVENT_LOG_v1 {"time_micros": 1760303555687142, "job": 16, "event": "compaction_finished", "compaction_time_micros": 65102, "compaction_time_cpu_micros": 22000, "output_level": 6, "num_output_files": 1, "total_output_size": 12030274, "num_input_records": 5436, "num_output_records": 4940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
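[annotation] The flush/compaction pair above is the monitor's periodic manual RocksDB compaction, and the amplification figures in the JOB 16 summary follow directly from byte counts already logged: the 604,598-byte L0 table (#37) plus the 15,229,565-byte L6 input (#35) were rewritten into one 12,030,274-byte L6 table (#38). A quick arithmetic check of the logged ratios:

```python
l0_in = 604598            # table #37: flush output, compaction input
total_in = 15834163       # input_data_size from the compaction_started event
l6_in = total_in - l0_in  # table #35
out = 12030274            # table #38: compaction output

write_amp = out / l0_in            # bytes written per byte of new L0 data
rw_amp = (total_in + out) / l0_in  # bytes read + written per byte of new L0 data
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
# -> write-amplify 19.9, read-write-amplify 46.1  (matches the JOB 16 summary)
```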
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555687307, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303555689319, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.620497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.689370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.689377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.689380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.689382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:12:35.689385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:12:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.045437835 +0000 UTC m=+0.068050218 container create b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.019171218 +0000 UTC m=+0.041783641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:36 np0005481680 systemd[1]: Started libpod-conmon-b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3.scope.
Oct 12 17:12:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.186274871 +0000 UTC m=+0.208887294 container init b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.198342886 +0000 UTC m=+0.220955269 container start b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.203703462 +0000 UTC m=+0.226315835 container attach b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:36 np0005481680 recursing_lichterman[211378]: 167 167
Oct 12 17:12:36 np0005481680 systemd[1]: libpod-b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3.scope: Deactivated successfully.
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.210854224 +0000 UTC m=+0.233466597 container died b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:12:36 np0005481680 python3.9[211373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fd037de9a073863ebe2c1cbc9b85d46f5e052af318095cea24bd0723337190b6-merged.mount: Deactivated successfully.
Oct 12 17:12:36 np0005481680 podman[211352]: 2025-10-12 21:12:36.28124153 +0000 UTC m=+0.303853913 container remove b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:12:36 np0005481680 systemd[1]: libpod-conmon-b4d9fbf305b86b7a950dccb56344985ce0df30cd6148ec2c5c1e9bd6f0e83cc3.scope: Deactivated successfully.
Oct 12 17:12:36 np0005481680 podman[211427]: 2025-10-12 21:12:36.496704469 +0000 UTC m=+0.064411786 container create f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:12:36 np0005481680 systemd[1]: Started libpod-conmon-f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b.scope.
Oct 12 17:12:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:12:36 np0005481680 podman[211427]: 2025-10-12 21:12:36.468589116 +0000 UTC m=+0.036296503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:12:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:12:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588eafb9d7a87805eb7ff4a6390acbe3cd4d4c74cbf157daef83663680c7fabd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588eafb9d7a87805eb7ff4a6390acbe3cd4d4c74cbf157daef83663680c7fabd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588eafb9d7a87805eb7ff4a6390acbe3cd4d4c74cbf157daef83663680c7fabd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588eafb9d7a87805eb7ff4a6390acbe3cd4d4c74cbf157daef83663680c7fabd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:12:36 np0005481680 podman[211427]: 2025-10-12 21:12:36.607621545 +0000 UTC m=+0.175328892 container init f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:36 np0005481680 podman[211427]: 2025-10-12 21:12:36.624940975 +0000 UTC m=+0.192648322 container start f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:12:36 np0005481680 podman[211427]: 2025-10-12 21:12:36.62908485 +0000 UTC m=+0.196792197 container attach f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 17:12:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:37.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:12:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:37.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
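[annotation] Alertmanager cannot deliver to the dashboard's prometheus_receiver on compute-1 (context deadline exceeded) or compute-2 (i/o timeout), so only this node's dashboard receives alerts. The failing call is an ordinary HTTP POST and can be reproduced from the host to separate a network problem from a dashboard one; a hedged probe sketch — the URL is copied verbatim from the error, the payload shape is an assumption:

```python
import json
import urllib.request

# URL taken verbatim from the dispatcher error above
url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url,
    data=json.dumps({"alerts": []}).encode(),  # minimal body; the real payload is Alertmanager's webhook format
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
except OSError as exc:  # URLError and socket.timeout are both OSError subclasses
    print("unreachable:", exc)
```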
Oct 12 17:12:37 np0005481680 python3.9[211583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:37 np0005481680 lvm[211704]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:12:37 np0005481680 lvm[211704]: VG ceph_vg0 finished
Oct 12 17:12:37 np0005481680 practical_dubinsky[211492]: {}
Oct 12 17:12:37 np0005481680 systemd[1]: libpod-f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b.scope: Deactivated successfully.
Oct 12 17:12:37 np0005481680 systemd[1]: libpod-f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b.scope: Consumed 1.421s CPU time.
Oct 12 17:12:37 np0005481680 podman[211427]: 2025-10-12 21:12:37.480537652 +0000 UTC m=+1.048244989 container died f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:12:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-588eafb9d7a87805eb7ff4a6390acbe3cd4d4c74cbf157daef83663680c7fabd-merged.mount: Deactivated successfully.
Oct 12 17:12:37 np0005481680 podman[211427]: 2025-10-12 21:12:37.545524121 +0000 UTC m=+1.113231468 container remove f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 12 17:12:37 np0005481680 systemd[1]: libpod-conmon-f2e8e7704fe66045e6cc85c22c1088fea7374eb3536fe371897be53264c0324b.scope: Deactivated successfully.
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:12:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:12:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211237 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:12:37 np0005481680 python3.9[211863]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:12:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:12:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e30000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:38 np0005481680 python3.9[211988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303557.3114731-2285-260102717610499/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:39 np0005481680 python3.9[212156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:12:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:39.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:12:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:39 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:40.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:40 np0005481680 python3.9[212280]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303558.8894138-2285-273862449463150/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:12:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:41 np0005481680 python3.9[212432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:41 np0005481680 python3.9[212556]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303560.4494727-2285-180382565855741/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:41 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:42.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:12:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:12:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211242 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:12:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:12:42 np0005481680 python3.9[212709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:43 np0005481680 python3.9[212832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303561.9612684-2285-148150111354237/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:43.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:43 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:43 np0005481680 python3.9[212986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:44.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:12:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:44 np0005481680 python3.9[213109]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303563.3865695-2285-33505533116743/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:45 np0005481680 python3.9[213262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:45.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:45 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:46 np0005481680 python3.9[213386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303564.8883429-2285-3735036416102/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 12 17:12:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:46 np0005481680 python3.9[213538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:47 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:12:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:47.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:12:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:47.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:12:47 np0005481680 python3.9[213662]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303566.307609-2285-111654089940737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:47.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:47 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:12:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:12:48 np0005481680 python3.9[213815]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:12:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 12 17:12:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:49 np0005481680 python3.9[213938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303567.7242827-2285-70666905467545/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:49 np0005481680 python3.9[214092]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:49 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:12:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:12:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:50.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:50 np0005481680 podman[214187]: 2025-10-12 21:12:50.301931872 +0000 UTC m=+0.091480803 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:12:50 np0005481680 python3.9[214232]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303569.215371-2285-58225180468779/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 12 17:12:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:51 np0005481680 python3.9[214387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:51 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:51 np0005481680 python3.9[214511]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303570.67948-2285-235631930299376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:52] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:12:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:12:52] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:12:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:12:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:52 np0005481680 python3.9[214663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:53 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:12:53 np0005481680 python3.9[214787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303572.1569347-2285-112778062957849/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:53 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:54.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:54 np0005481680 python3.9[214940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:12:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:54 np0005481680 python3.9[215063]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303573.6359413-2285-50050891806250/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:12:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:12:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:55.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:12:55 np0005481680 python3.9[215216]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:55 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:56.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:56 np0005481680 python3.9[215340]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303575.174175-2285-270275143137987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:12:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:12:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:12:57 np0005481680 python3.9[215493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:12:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:12:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:12:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:57 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:12:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:12:58.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:12:58 np0005481680 python3.9[215640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303576.7559767-2285-99063225531686/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:12:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:12:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:12:58 np0005481680 python3.9[215792]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:12:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:12:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:12:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:12:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:12:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211259 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:12:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:12:59 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:00 np0005481680 python3.9[215949]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 12 17:13:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:13:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:01.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:01 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:02] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:13:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:02] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:13:02 np0005481680 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 12 17:13:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:02.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:02 np0005481680 python3.9[216107]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:13:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:03 np0005481680 python3.9[216259]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:13:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:13:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:03 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:03 np0005481680 podman[216385]: 2025-10-12 21:13:03.951871992 +0000 UTC m=+0.149843230 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:13:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:04.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:04 np0005481680 python3.9[216425]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:13:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:05 np0005481680 python3.9[216591]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:05.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:05 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:05 np0005481680 python3.9[216745]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:06.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:06 np0005481680 python3.9[216897]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:07.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:13:07 np0005481680 python3.9[217050]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:07.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:07 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:08.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:08 np0005481680 python3.9[217203]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:09 np0005481680 python3.9[217355]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:09 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:10 np0005481680 python3.9[217509]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
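Taken together, the ansible-ansible.legacy.copy invocations in this stretch fan the single TLS key/cert/CA triple under /var/lib/openstack/certs/libvirt/default/ out to the libvirt and QEMU PKI paths. A shell sketch of the same layout, with install(1) standing in for the copy module (owners, groups, and modes exactly as logged; SELinux labeling is assumed to be handled elsewhere):

    # libvirt server/client material (the server key is root-only)
    install -o root -g root -m 0600 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/libvirt/private/serverkey.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/libvirt/clientcert.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/libvirt/private/clientkey.pem
    install -o root -g root -m 0644 /var/lib/openstack/certs/libvirt/default/ca.crt  /etc/pki/CA/cacert.pem
    # QEMU-native TLS (group qemu, mode 0640 throughout)
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/qemu/server-cert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/server-key.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.crt /etc/pki/qemu/client-cert.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/tls.key /etc/pki/qemu/client-key.pem
    install -o root -g qemu -m 0640 /var/lib/openstack/certs/libvirt/default/ca.crt  /etc/pki/qemu/ca-cert.pem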
Oct 12 17:13:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200013f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:11 np0005481680 python3.9[217663]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:13:11 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:11 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:11 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:11 np0005481680 systemd[1]: Starting libvirt logging daemon socket...
Oct 12 17:13:11 np0005481680 systemd[1]: Listening on libvirt logging daemon socket.
Oct 12 17:13:11 np0005481680 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 12 17:13:11 np0005481680 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 12 17:13:11 np0005481680 systemd[1]: Starting libvirt logging daemon...
Oct 12 17:13:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:11 np0005481680 systemd[1]: Started libvirt logging daemon.
Oct 12 17:13:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:11 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:12] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:13:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:12] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:13:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:12.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:13:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:12 np0005481680 python3.9[217858]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:13:12 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:12 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:12 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:13 np0005481680 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 12 17:13:13 np0005481680 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 12 17:13:13 np0005481680 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 12 17:13:13 np0005481680 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 12 17:13:13 np0005481680 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 12 17:13:13 np0005481680 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 12 17:13:13 np0005481680 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 12 17:13:13 np0005481680 systemd[1]: Starting libvirt nodedev daemon...
Oct 12 17:13:13 np0005481680 systemd[1]: Started libvirt nodedev daemon.
Oct 12 17:13:13 np0005481680 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 12 17:13:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:13 np0005481680 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 12 17:13:13 np0005481680 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 12 17:13:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:13 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200013f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:14 np0005481680 python3.9[218084]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:13:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:14.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:14 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:14 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:14 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:14 np0005481680 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 12 17:13:14 np0005481680 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 12 17:13:14 np0005481680 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 12 17:13:14 np0005481680 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 12 17:13:14 np0005481680 systemd[1]: Starting libvirt proxy daemon...
Oct 12 17:13:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Oct 12 17:13:14 np0005481680 systemd[1]: Started libvirt proxy daemon.
Oct 12 17:13:14 np0005481680 setroubleshoot[217896]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d35f79f4-c651-4b29-95ae-3f5250d4733a
Oct 12 17:13:14 np0005481680 setroubleshoot[217896]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
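The setroubleshoot record above embeds its own remediation how-to. As a minimal shell sketch of the catchall path (the module name my-virtlogd is just the tool's generated example; the sealert id is the one referenced in the log):

    # Show the full report for the logged denial
    sealert -l d35f79f4-c651-4b29-95ae-3f5250d4733a
    # Build a local policy module from the recorded virtlogd AVCs and load it
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp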
Oct 12 17:13:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:15 np0005481680 python3.9[218297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:13:15 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:15.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:15 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:15 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:15 np0005481680 systemd[1]: Listening on libvirt locking daemon socket.
Oct 12 17:13:15 np0005481680 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 12 17:13:15 np0005481680 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 12 17:13:15 np0005481680 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 12 17:13:16 np0005481680 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 12 17:13:16 np0005481680 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 12 17:13:16 np0005481680 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 12 17:13:16 np0005481680 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 12 17:13:16 np0005481680 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 12 17:13:16 np0005481680 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 12 17:13:16 np0005481680 systemd[1]: Starting libvirt QEMU daemon...
Oct 12 17:13:16 np0005481680 systemd[1]: Started libvirt QEMU daemon.
Oct 12 17:13:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:16.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:13:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:17 np0005481680 python3.9[218511]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:13:17 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:17.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:13:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:17.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
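The alertmanager dispatcher keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2. A sketch of probing one failing endpoint by hand (URL copied from the log; the empty JSON body and 5-second timeout are arbitrary choices for the probe):

    curl -m 5 -X POST -H 'Content-Type: application/json' -d '{}' \
        http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    # Expect a timeout or connection refused while the receiver is unreachable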
Oct 12 17:13:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211317 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:13:17 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:17 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:17 np0005481680 systemd[1]: Starting libvirt secret daemon socket...
Oct 12 17:13:17 np0005481680 systemd[1]: Listening on libvirt secret daemon socket.
Oct 12 17:13:17 np0005481680 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 12 17:13:17 np0005481680 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 12 17:13:17 np0005481680 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 12 17:13:17 np0005481680 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 12 17:13:17 np0005481680 systemd[1]: Starting libvirt secret daemon...
Oct 12 17:13:17 np0005481680 systemd[1]: Started libvirt secret daemon.
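Each ansible-ansible.builtin.systemd invocation in this stretch (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) maps onto plain systemctl calls; the "Reloading." lines and the socket/daemon start-ups that follow each invocation are the visible effect of roughly:

    systemctl daemon-reload                  # daemon_reload=True
    systemctl restart virtlogd.service       # state=restarted; matching sockets start first
    systemctl restart virtnodedevd.service
    systemctl restart virtproxyd.service
    systemctl restart virtqemud.service
    systemctl restart virtsecretd.service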
Oct 12 17:13:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:17.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:17 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:13:18
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.nfs', 'backups', 'vms', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.data', '.mgr']
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:13:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:13:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:13:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:13:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:13:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:13:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:13:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:13:18.346 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
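The pg target figures above are consistent with capacity_ratio × bias × (target PGs per OSD × OSD count). Assuming the defaults this excerpt suggests but does not log (mon_target_pg_per_osd=100 and 3 OSDs, i.e. a 300 multiplier), the arithmetic reproduces the logged values:

    # .mgr: 7.185749983720779e-06 * 1.0 * 300 -> 0.0021557249951162337, quantized to 1
    awk 'BEGIN { printf "%.17g\n", 7.185749983720779e-06 * 1.0 * 300 }'
    # cephfs.cephfs.meta: bias 4.0 -> 0.0006104707950771635, quantized to 16
    awk 'BEGIN { printf "%.17g\n", 5.087256625643029e-07 * 4.0 * 300 }'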
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:13:18 np0005481680 python3.9[218747]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:13:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:19 np0005481680 python3.9[218900]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 17:13:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:19.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:19 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:20 np0005481680 python3.9[219053]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
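The shell snippet above extracts the cluster fsid from ceph.conf. Against a minimal file it behaves like this (the fsid value is the cluster id seen throughout this log; xargs only trims the surrounding whitespace):

    cat > /tmp/ceph.conf.example <<'EOF'
    [global]
    fsid = 5adb8c35-1b74-5730-a252-62321f654cd5
    EOF
    awk -F '=' '/fsid/ {print $2}' /tmp/ceph.conf.example | xargs
    # -> 5adb8c35-1b74-5730-a252-62321f654cd5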
Oct 12 17:13:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:13:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:21 np0005481680 podman[219162]: 2025-10-12 21:13:21.129208281 +0000 UTC m=+0.082795773 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 12 17:13:21 np0005481680 python3.9[219228]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 17:13:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:21.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:21 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:22 np0005481680 python3.9[219379]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:13:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:23 np0005481680 python3.9[219500]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303601.8115518-3359-248275611940676/.source.xml follow=False _original_basename=secret.xml.j2 checksum=2efab9d30b43fdf142cfafb686c11bf3a7a728ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:23 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:23 np0005481680 python3.9[219654]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 5adb8c35-1b74-5730-a252-62321f654cd5
    virsh secret-define --file /tmp/secret.xml
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
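The command above re-registers the Ceph secret with libvirt. The copied /tmp/secret.xml was not logged (content=NOT_LOGGING_PARAMETER), so this sketch uses the standard libvirt ceph-usage form with the uuid from the log, not the actual file; the usage name client.openstack is an assumption:

    cat > /tmp/secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>5adb8c35-1b74-5730-a252-62321f654cd5</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-undefine 5adb8c35-1b74-5730-a252-62321f654cd5
    virsh secret-define --file /tmp/secret.xml
    # The key material itself would be attached separately, e.g.:
    # virsh secret-set-value --secret 5adb8c35-1b74-5730-a252-62321f654cd5 --base64 "$CEPH_CLIENT_KEY"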
Oct 12 17:13:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:13:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:24 np0005481680 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 12 17:13:24 np0005481680 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 12 17:13:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:25 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:13:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:25 np0005481680 python3.9[219817]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:25 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:13:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:27.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:13:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:27 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:28.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:13:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:13:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:13:28 np0005481680 python3.9[220283]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:29 np0005481680 python3.9[220436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:29 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:30 np0005481680 python3.9[220560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303608.8961294-3524-86589109785401/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:13:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:31 np0005481680 python3.9[220712]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:31 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:13:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:31 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:32 np0005481680 python3.9[220866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:13:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:32 np0005481680 python3.9[220944]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:13:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:13:33 np0005481680 python3.9[221097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:33 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:34 np0005481680 python3.9[221176]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ijyn_oxm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:34 np0005481680 podman[221177]: 2025-10-12 21:13:34.155876023 +0000 UTC m=+0.115382043 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:13:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:13:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00002160 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:34 np0005481680 python3.9[221354]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:35 np0005481680 python3.9[221433]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:35 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:36 np0005481680 python3.9[221586]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:13:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:13:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:37.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:13:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211337 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:13:37 np0005481680 python3[221740]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 12 17:13:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:37.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:37 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:38 np0005481680 python3.9[221968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:13:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:13:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:13:38 np0005481680 python3.9[222076]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.689771094 +0000 UTC m=+0.071888621 container create 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:13:39 np0005481680 systemd[1]: Started libpod-conmon-8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6.scope.
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.658628398 +0000 UTC m=+0.040745975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.803255318 +0000 UTC m=+0.185372865 container init 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.814891668 +0000 UTC m=+0.197009195 container start 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:13:39 np0005481680 nice_colden[222338]: 167 167
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.821487122 +0000 UTC m=+0.203604669 container attach 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:13:39 np0005481680 systemd[1]: libpod-8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6.scope: Deactivated successfully.
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.823967594 +0000 UTC m=+0.206085131 container died 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:13:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0d65180d82509411387a5c121acb8bf6b230867f8bb1b80385e95673d5af84a9-merged.mount: Deactivated successfully.
Oct 12 17:13:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:39 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:39 np0005481680 podman[222293]: 2025-10-12 21:13:39.887534206 +0000 UTC m=+0.269651733 container remove 8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:13:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:13:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:39 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:13:39 np0005481680 systemd[1]: libpod-conmon-8e9e2afcf75eac47eaf69b207069b22b0764925ac8936f1ee33472507d34cff6.scope: Deactivated successfully.
Oct 12 17:13:39 np0005481680 python3.9[222340]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.162440397 +0000 UTC m=+0.075233452 container create 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:13:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:40 np0005481680 systemd[1]: Started libpod-conmon-3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13.scope.
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.130335938 +0000 UTC m=+0.043129043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.269213315 +0000 UTC m=+0.182006420 container init 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.283395008 +0000 UTC m=+0.196188053 container start 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.288222918 +0000 UTC m=+0.201016033 container attach 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:13:40 np0005481680 python3.9[222461]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:13:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:40 np0005481680 sad_engelbart[222434]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:13:40 np0005481680 sad_engelbart[222434]: --> All data devices are unavailable
Oct 12 17:13:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:40 np0005481680 systemd[1]: libpod-3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13.scope: Deactivated successfully.
Oct 12 17:13:40 np0005481680 conmon[222434]: conmon 3a4e8f7fc5a9a9954f2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13.scope/container/memory.events
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.695825722 +0000 UTC m=+0.608618777 container died 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:13:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-06ee543cf701b10cdb709d247e8fd43b37550c5ca44a9d86ec07d8dfe1468371-merged.mount: Deactivated successfully.
Oct 12 17:13:40 np0005481680 podman[222388]: 2025-10-12 21:13:40.758527143 +0000 UTC m=+0.671320188 container remove 3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_engelbart, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:13:40 np0005481680 systemd[1]: libpod-conmon-3a4e8f7fc5a9a9954f2bf047b476f463c9ae08b0371a02c6cd4c81ab18a23c13.scope: Deactivated successfully.
Oct 12 17:13:41 np0005481680 python3.9[222687]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.521595905 +0000 UTC m=+0.065063780 container create da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:13:41 np0005481680 systemd[1]: Started libpod-conmon-da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0.scope.
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.492114901 +0000 UTC m=+0.035582816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:41 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.645380115 +0000 UTC m=+0.188848000 container init da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.657956398 +0000 UTC m=+0.201424263 container start da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.662287497 +0000 UTC m=+0.205755362 container attach da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:13:41 np0005481680 optimistic_boyd[222778]: 167 167
Oct 12 17:13:41 np0005481680 systemd[1]: libpod-da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0.scope: Deactivated successfully.
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.667213489 +0000 UTC m=+0.210681364 container died da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 17:13:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4df8ad0c92656f07569b0798724812a71dab4819c7d1d5544d104f07ff86ab15-merged.mount: Deactivated successfully.
Oct 12 17:13:41 np0005481680 podman[222741]: 2025-10-12 21:13:41.723143691 +0000 UTC m=+0.266611566 container remove da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_boyd, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 17:13:41 np0005481680 systemd[1]: libpod-conmon-da1bdd3db54ec4ea9412bc36ab6684ee4542feb7d889cbe2021e889e759ce0c0.scope: Deactivated successfully.
Oct 12 17:13:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:41 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:41 np0005481680 podman[222849]: 2025-10-12 21:13:41.970815345 +0000 UTC m=+0.070114736 container create f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:13:41 np0005481680 python3.9[222843]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:13:42 np0005481680 systemd[1]: Started libpod-conmon-f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938.scope.
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:41.94408589 +0000 UTC m=+0.043385341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c55f4c419ad414fbe788129cf28de6b146cd45ecd4ab783b81de94ef296d0bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c55f4c419ad414fbe788129cf28de6b146cd45ecd4ab783b81de94ef296d0bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c55f4c419ad414fbe788129cf28de6b146cd45ecd4ab783b81de94ef296d0bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:42 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c55f4c419ad414fbe788129cf28de6b146cd45ecd4ab783b81de94ef296d0bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:42.091393456 +0000 UTC m=+0.190692887 container init f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:42.103513128 +0000 UTC m=+0.202812499 container start f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:42.107682252 +0000 UTC m=+0.206981703 container attach f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:13:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:42 np0005481680 stoic_curie[222866]: {
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:    "0": [
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:        {
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "devices": [
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "/dev/loop3"
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            ],
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "lv_name": "ceph_lv0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "lv_size": "21470642176",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "name": "ceph_lv0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "tags": {
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.cluster_name": "ceph",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.crush_device_class": "",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.encrypted": "0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.osd_id": "0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.type": "block",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.vdo": "0",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:                "ceph.with_tpm": "0"
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            },
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "type": "block",
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:            "vg_name": "ceph_vg0"
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:        }
Oct 12 17:13:42 np0005481680 stoic_curie[222866]:    ]
Oct 12 17:13:42 np0005481680 stoic_curie[222866]: }
Oct 12 17:13:42 np0005481680 systemd[1]: libpod-f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938.scope: Deactivated successfully.
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:42.47888703 +0000 UTC m=+0.578186431 container died f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:13:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4c55f4c419ad414fbe788129cf28de6b146cd45ecd4ab783b81de94ef296d0bb-merged.mount: Deactivated successfully.
Oct 12 17:13:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:13:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:42 np0005481680 podman[222849]: 2025-10-12 21:13:42.735419625 +0000 UTC m=+0.834719016 container remove f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_curie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:13:42 np0005481680 systemd[1]: libpod-conmon-f5d9be060cacd3dd02f1c57558a973b387a632a52473a8eb05420677499b3938.scope: Deactivated successfully.
Oct 12 17:13:42 np0005481680 python3.9[223037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.505273936 +0000 UTC m=+0.041907564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:43 np0005481680 python3.9[223201]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.623221081 +0000 UTC m=+0.159854689 container create e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:13:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
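These radosgw triplets look like load-balancer health probes: an anonymous HEAD / answered 200 in about a millisecond, logged once by the request machinery and once by the beast frontend in access-log form. A hedged sketch for splitting the beast line into fields; the pattern is inferred from this one log shape, not from a published format:

    import re

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:13:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = re.search(r'(?P<ip>\d+\.\d+\.\d+\.\d+) - (?P<user>\S+) '
                  r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
                  r'(?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s', line)
    if m:
        print(m.group('ip'), m.group('req'), m.group('status'), m.group('lat'))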
Oct 12 17:13:43 np0005481680 systemd[1]: Started libpod-conmon-e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa.scope.
Oct 12 17:13:43 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.800321519 +0000 UTC m=+0.336955197 container init e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.812047801 +0000 UTC m=+0.348681419 container start e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:13:43 np0005481680 suspicious_bhaskara[223249]: 167 167
Oct 12 17:13:43 np0005481680 systemd[1]: libpod-e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa.scope: Deactivated successfully.
Oct 12 17:13:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:43 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.888873103 +0000 UTC m=+0.425506721 container attach e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:13:43 np0005481680 podman[223208]: 2025-10-12 21:13:43.889552769 +0000 UTC m=+0.426186387 container died e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:13:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bcf03ae95945821a13b6922caaf8b246b28bf5e725e33382bc96ae9ea44cc0fe-merged.mount: Deactivated successfully.
Oct 12 17:13:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:44 np0005481680 podman[223208]: 2025-10-12 21:13:44.172744178 +0000 UTC m=+0.709377786 container remove e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_bhaskara, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:13:44 np0005481680 systemd[1]: libpod-conmon-e3f4fa521eb8b67e1e50384b9f8e891f7621baf04862043abb4fe6ffef1999aa.scope: Deactivated successfully.
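The suspicious_bhaskara lifecycle above (pull, create, init, start, attach, died, remove, all within a second) is a one-shot check run in the ceph image; the "167 167" it printed is the ceph user's uid/gid pair. A sketch of the same pattern with a hypothetical probe command; `podman run --rm` collapses the create/attach/remove steps the log shows separately:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # Hypothetical probe: ask the image for the owner of /var/lib/ceph,
    # which would yield the uid/gid pair seen in the log ("167 167").
    out = subprocess.run(["podman", "run", "--rm", IMAGE,
                          "stat", "-c", "%u %g", "/var/lib/ceph"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())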
Oct 12 17:13:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:44 np0005481680 podman[223400]: 2025-10-12 21:13:44.460035518 +0000 UTC m=+0.088602877 container create b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:13:44 np0005481680 python3.9[223394]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:44 np0005481680 podman[223400]: 2025-10-12 21:13:44.414792082 +0000 UTC m=+0.043359441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:13:44 np0005481680 systemd[1]: Started libpod-conmon-b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8.scope.
Oct 12 17:13:44 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:13:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ab8d7f9cd97e83f6ec58db0b3ca4df2c1dde52f3c4668bf737a047485c060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ab8d7f9cd97e83f6ec58db0b3ca4df2c1dde52f3c4668bf737a047485c060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ab8d7f9cd97e83f6ec58db0b3ca4df2c1dde52f3c4668bf737a047485c060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:44 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ab8d7f9cd97e83f6ec58db0b3ca4df2c1dde52f3c4668bf737a047485c060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:13:44 np0005481680 podman[223400]: 2025-10-12 21:13:44.58186797 +0000 UTC m=+0.210435379 container init b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:13:44 np0005481680 podman[223400]: 2025-10-12 21:13:44.594083944 +0000 UTC m=+0.222651313 container start b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:13:44 np0005481680 podman[223400]: 2025-10-12 21:13:44.59793788 +0000 UTC m=+0.226505239 container attach b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:13:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:13:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:45 np0005481680 python3.9[223574]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760303623.8055875-3899-128421081138504/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:45 np0005481680 lvm[223642]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:13:45 np0005481680 lvm[223642]: VG ceph_vg0 finished
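The two lvm[] lines record event-based autoactivation: the last PV of ceph_vg0 came online, so the VG is complete and its LVs can be activated. A quick way to confirm what that covers, sketched with the standard LVM2 JSON report:

    import subprocess

    # List the LVs of the VG named in the pvscan messages above;
    # --reportformat json is stock LVM2 output.
    subprocess.run(["lvs", "--reportformat", "json",
                    "-o", "lv_name,vg_name,lv_size,devices", "ceph_vg0"],
                   check=True)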
Oct 12 17:13:45 np0005481680 priceless_ptolemy[223418]: {}
Oct 12 17:13:45 np0005481680 systemd[1]: libpod-b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8.scope: Deactivated successfully.
Oct 12 17:13:45 np0005481680 podman[223400]: 2025-10-12 21:13:45.447700709 +0000 UTC m=+1.076268048 container died b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:13:45 np0005481680 systemd[1]: libpod-b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8.scope: Consumed 1.435s CPU time.
Oct 12 17:13:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-406ab8d7f9cd97e83f6ec58db0b3ca4df2c1dde52f3c4668bf737a047485c060-merged.mount: Deactivated successfully.
Oct 12 17:13:45 np0005481680 podman[223400]: 2025-10-12 21:13:45.505752575 +0000 UTC m=+1.134319914 container remove b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:13:45 np0005481680 systemd[1]: libpod-conmon-b1757e2b22a781bc254656a0f995f2fbc7695a69a2380a7122f9318dbda068a8.scope: Deactivated successfully.
Oct 12 17:13:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:13:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:13:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:45.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:45 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:46 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:46 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:13:46 np0005481680 python3.9[223810]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:47 np0005481680 python3.9[223962]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
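This task is the dry run of the EDPM firewall update: all five rule fragments are concatenated in include order and handed to `nft -c -f -`, which parses and checks the ruleset without touching the kernel. The same check as a sketch:

    import subprocess

    files = ["/etc/nftables/edpm-chains.nft",
             "/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft",
             "/etc/nftables/edpm-jumps.nft"]
    ruleset = "".join(open(f).read() for f in files)
    # -c: check only; -f -: read the ruleset from stdin.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)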
Oct 12 17:13:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:47.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:13:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:47.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:47 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:48 np0005481680 python3.9[224119]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
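Decoding the #012 newline escapes, the blockinfile task keeps this managed block in /etc/sysconfig/nftables.conf (validated with `nft -c -f %s` before the write), so the EDPM chains, rules and jumps are loaded on every nftables.service start:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK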
Oct 12 17:13:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:13:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
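The mgr polls the OSD blocklist through a mon_command, which the mon logs to the audit channel as a dispatch. The same query from a shell with a client keyring, sketched:

    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    entries = json.loads(out)
    print(entries if entries else "blocklist is empty")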
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:13:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:49 np0005481680 python3.9[224271]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:13:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:49 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:50 np0005481680 python3.9[224426]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:13:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:13:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:51 np0005481680 python3.9[224580]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
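With the dry run green and the chains loaded (the `nft -f /etc/nftables/edpm-chains.nft` task above), this is the apply phase: flushes, rules and updated jumps are streamed to nft without -c, so this time the kernel ruleset actually changes. Sketched:

    import subprocess

    files = ["/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft"]
    ruleset = "".join(open(f).read() for f in files)
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)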
Oct 12 17:13:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:13:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:51.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:13:51 np0005481680 podman[224709]: 2025-10-12 21:13:51.817595957 +0000 UTC m=+0.095080818 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
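health_status events like this one are podman periodically running the configured healthcheck (the '/openstack/healthcheck' script mounted at /openstack) inside ovn_metadata_agent. The last result can be read back from the container state, as a sketch:

    import json, subprocess

    out = subprocess.run(["podman", "inspect", "ovn_metadata_agent"],
                         capture_output=True, text=True, check=True).stdout
    health = json.loads(out)[0]["State"]["Health"]
    print(health["Status"], "failing streak:", health["FailingStreak"])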
Oct 12 17:13:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:51 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20003b20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:52] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:13:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:13:52] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
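These paired lines are Prometheus scraping the mgr's prometheus module (cherrypy serving /metrics). A quick manual probe of the same endpoint, assuming the module's default port 9283 on this host's ctlplane name:

    import urllib.request

    # Hostname and port are assumptions: compute-0's ctlplane name and the
    # prometheus module default; adjust to the actual mgr endpoint.
    url = "http://compute-0.ctlplane.example.com:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as r:
        print(r.read().decode().splitlines()[0])   # first exposition line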
Oct 12 17:13:52 np0005481680 python3.9[224754]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:52.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:13:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:52 np0005481680 python3.9[224911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:53 np0005481680 python3.9[225035]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303632.243983-4115-109699158688057/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:53 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:54.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:54 np0005481680 python3.9[225189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:13:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:55 np0005481680 python3.9[225312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303633.8101099-4160-225075644200078/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:13:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:55.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:55 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:55 np0005481680 python3.9[225466]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:13:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:13:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:13:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:13:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:56 np0005481680 python3.9[225589]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303635.3894765-4205-212425461211208/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:13:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:57.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:13:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:13:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
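Both ceph-dashboard webhook receivers keep timing out (compute-1 by context deadline, compute-2 by TCP i/o timeout), so every alert dispatch fails after two attempts. A minimal reachability probe of one receiver URL from the log, to separate a dead endpoint from an alertmanager problem:

    import urllib.request, urllib.error

    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}", headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as r:
            print("receiver answered:", r.status)
    except (urllib.error.URLError, OSError) as exc:
        print("receiver unreachable:", exc)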
Oct 12 17:13:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000048s ======
Oct 12 17:13:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Oct 12 17:13:57 np0005481680 python3.9[225742]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:13:57 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:57 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:57 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:57 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:13:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:13:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:13:58 np0005481680 systemd[1]: Reached target edpm_libvirt.target.
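"Reached target" closes the loop on the ansible systemd task above (daemon_reload=True, enabled=True, state=restarted): the freshly copied edpm_libvirt.target is now active. The module call is roughly equivalent to:

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_libvirt.target"],
                ["systemctl", "restart", "edpm_libvirt.target"]):
        subprocess.run(cmd, check=True)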
Oct 12 17:13:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:13:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:13:59 np0005481680 python3.9[225960]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 12 17:13:59 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:59 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:59 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:13:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.003000072s ======
Oct 12 17:13:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:13:59.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Oct 12 17:13:59 np0005481680 systemd[1]: Reloading.
Oct 12 17:13:59 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:13:59 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:13:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:13:59 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c002190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:00.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:00 np0005481680 systemd[1]: session-54.scope: Deactivated successfully.
Oct 12 17:14:00 np0005481680 systemd[1]: session-54.scope: Consumed 4min 19.514s CPU time.
Oct 12 17:14:00 np0005481680 systemd-logind[783]: Session 54 logged out. Waiting for processes to exit.
Oct 12 17:14:00 np0005481680 systemd-logind[783]: Removed session 54.
Oct 12 17:14:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:01 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:02] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:14:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:02] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:14:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:02.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c0032d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:14:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:14:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:03 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e14002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:04.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:14:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c003450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:05 np0005481680 podman[226065]: 2025-10-12 21:14:05.245370353 +0000 UTC m=+0.195665661 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 12 17:14:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
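The anonymous "HEAD / HTTP/1.0" requests hitting radosgw from 192.168.122.100 and 192.168.122.102, repeating roughly every two seconds at 0-1 ms latency, look like load-balancer health probes. A minimal reproduction of such a probe; the host and port are assumptions, since the log does not name the beast frontend's listening port:

import http.client
# Assumed RGW endpoint; substitute the real host/port of the beast frontend.
conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)  # 200 while RGW is healthy
conn.close()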
Oct 12 17:14:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:05 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:06 np0005481680 systemd-logind[783]: New session 55 of user zuul.
Oct 12 17:14:06 np0005481680 systemd[1]: Started Session 55 of User zuul.
Oct 12 17:14:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:06.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:07.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
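The alertmanager dispatcher error above means both ceph-dashboard webhook receivers timed out ("context deadline exceeded" is Go's deadline error). A quick probe of one receiver with a short timeout reproduces the symptom; the URL is taken from the log, while the 2-second timeout is an arbitrary stand-in for alertmanager's deadline:

import urllib.request
url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
try:
    # Alertmanager POSTs a JSON body; an empty one suffices to test reachability.
    urllib.request.urlopen(url, data=b"{}", timeout=2)
    print("receiver reachable")
except OSError as exc:  # URLError and socket.timeout are both OSError subclasses
    print("receiver unreachable:", exc)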
Oct 12 17:14:07 np0005481680 python3.9[226246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:14:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:07 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:08.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:08 np0005481680 python3.9[226404]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
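This ansible.builtin.file task (and the /etc/target and /var/lib/iscsi tasks that follow) ensures a directory exists with mode 0755 and SELinux type container_file_t so containers can access it. A rough Python equivalent of what the module does, assuming chcon-style semantics for the context change:

import os
import subprocess
os.makedirs("/etc/iscsi", exist_ok=True)   # state=directory
os.chmod("/etc/iscsi", 0o755)              # mode=0755
# setype=container_file_t: apply the SELinux type, comparable to chcon
subprocess.run(["chcon", "-t", "container_file_t", "/etc/iscsi"], check=True)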
Oct 12 17:14:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:09 np0005481680 python3.9[226557]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:09 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:10.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:10 np0005481680 python3.9[226710]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:11 np0005481680 python3.9[226862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 12 17:14:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:11 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:12] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:14:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:12] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:14:12 np0005481680 python3.9[227016]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:12.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:13 np0005481680 python3.9[227168]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:14:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:13 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:14.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:14 np0005481680 python3.9[227324]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
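The ansible.builtin.systemd call above stops iscsid.socket and disables it for future boots, clearing the way for the containerized iscsid being configured below; the systemd "Reloading." that follows is its side effect. The CLI equivalent, as a one-line sketch:

import subprocess
# state=stopped plus enabled=False collapses to a single disable --now.
subprocess.run(["systemctl", "disable", "--now", "iscsid.socket"], check=True)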
Oct 12 17:14:14 np0005481680 systemd[1]: Reloading.
Oct 12 17:14:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:14:14 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:14:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:14 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:14:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:15.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:15 np0005481680 python3.9[227516]: ansible-ansible.builtin.service_facts Invoked
Oct 12 17:14:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:15 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:15 np0005481680 network[227533]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:14:15 np0005481680 network[227534]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:14:15 np0005481680 network[227535]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:14:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:16.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:17.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:14:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:17 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:14:18
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.nfs', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', '.rgw.root']
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
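"prepared 0/10 upmap changes" means the upmap balancer examined the listed pools and found nothing worth moving this round (the cluster is already balanced within the max misplaced threshold of 0.05). The same state can be queried directly; a small sketch using the ceph CLI:

import json
import subprocess
out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
status = json.loads(out)
print(status["active"], status["mode"])  # expected here: True upmap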
Oct 12 17:14:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:14:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:14:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:14:18.347 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:14:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:14:18.347 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:14:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:14:18.348 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
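The pg_autoscaler figures above are self-consistent with pg target = usage ratio x bias x (target PGs per OSD x OSD count), with 100 x 3 = 300 as the multiplier; the 100-per-OSD default and the 3-OSD count are inferred from the ratios, not stated in the log. A worked check:

# Ratios and biases copied from the pg_autoscaler lines above.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".nfs":               (6.359070782053786e-08, 1.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}
TARGET = 100 * 3  # assumed mon_target_pg_per_osd x OSD count
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * TARGET)  # matches each "pg target" above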
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:14:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:14:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4193 writes, 18K keys, 4193 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
Cumulative WAL: 4193 writes, 4193 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1441 writes, 5638 keys, 1441 commit groups, 1.0 writes per commit group, ingest: 10.78 MB, 0.02 MB/s
Interval WAL: 1441 writes, 1441 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    149.7      0.19              0.08         8    0.024       0      0       0.0       0.0
  L6      1/0   11.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    182.7    152.1      0.56              0.27         7    0.080     32K   3804       0.0       0.0
 Sum      1/0   11.47 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    136.1    151.5      0.75              0.35        15    0.050     32K   3804       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.1    159.1    157.4      0.22              0.10         4    0.055     11K   1521       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    182.7    152.1      0.56              0.27         7    0.080     32K   3804       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    160.9      0.18              0.08         7    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.028, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.11 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 0.7 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562cd3961350#2 capacity: 304.00 MB usage: 5.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(331,5.16 MB,1.69869%) FilterBlock(16,99.80 KB,0.0320585%) IndexBlock(16,192.30 KB,0.061773%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 12 17:14:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:19.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:19 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.655711) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660655768, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1101, "num_deletes": 254, "total_data_size": 1966781, "memory_usage": 1987040, "flush_reason": "Manual Compaction"}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660671305, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1927729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17945, "largest_seqno": 19045, "table_properties": {"data_size": 1922473, "index_size": 2716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10566, "raw_average_key_size": 18, "raw_value_size": 1912064, "raw_average_value_size": 3348, "num_data_blocks": 122, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303555, "oldest_key_time": 1760303555, "file_creation_time": 1760303660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 15652 microseconds, and 8062 cpu microseconds.
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.671363) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1927729 bytes OK
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.671386) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.674730) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.674753) EVENT_LOG_v1 {"time_micros": 1760303660674746, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.674774) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1961856, prev total WAL file size 1961856, number of live WAL files 2.
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.675841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1882KB)], [38(11MB)]
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660675908, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13958003, "oldest_snapshot_seqno": -1}
Oct 12 17:14:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4989 keys, 13487211 bytes, temperature: kUnknown
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660766721, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13487211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13452420, "index_size": 21241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126642, "raw_average_key_size": 25, "raw_value_size": 13360553, "raw_average_value_size": 2678, "num_data_blocks": 874, "num_entries": 4989, "num_filter_entries": 4989, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.767350) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13487211 bytes
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.770828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.0 rd, 147.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(14.2) write-amplify(7.0) OK, records in: 5511, records dropped: 522 output_compression: NoCompression
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.770858) EVENT_LOG_v1 {"time_micros": 1760303660770845, "job": 18, "event": "compaction_finished", "compaction_time_micros": 91245, "compaction_time_cpu_micros": 45001, "output_level": 6, "num_output_files": 1, "total_output_size": 13487211, "num_input_records": 5511, "num_output_records": 4989, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.675742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.771529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.771537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.771540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.771543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:14:20.771546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660772841, "job": 0, "event": "table_file_deletion", "file_number": 40}
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:14:20 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303660777096, "job": 0, "event": "table_file_deletion", "file_number": 38}
Oct 12 17:14:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:21.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:21 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:22] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:22 np0005481680 podman[227812]: 2025-10-12 21:14:22.139697526 +0000 UTC m=+0.094245416 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 12 17:14:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:22 np0005481680 python3.9[227857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:14:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:23 np0005481680 systemd[1]: Reloading.
Oct 12 17:14:23 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:14:23 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:14:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:23.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:23 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:14:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:24 np0005481680 python3.9[228048]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:14:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:25 np0005481680 python3.9[228202]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
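The podman_container task above runs /usr/sbin/iscsi-iname once in a throwaway container to generate a unique iSCSI initiator name. A CLI-level sketch of the same operation, with the image digest taken from the log:

import subprocess
out = subprocess.run(
    ["podman", "run", "--rm", "-t",
     "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f",
     "/usr/sbin/iscsi-iname"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # typically an iqn.1994-05.com.redhat:... name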
Oct 12 17:14:25 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:14:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:25 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:26.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:27.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:14:27 np0005481680 podman[228217]: 2025-10-12 21:14:27.183742695 +0000 UTC m=+1.224062846 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 12 17:14:27 np0005481680 podman[228275]: 2025-10-12 21:14:27.423773428 +0000 UTC m=+0.071319046 container create 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.4752] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 12 17:14:27 np0005481680 podman[228275]: 2025-10-12 21:14:27.394163282 +0000 UTC m=+0.041708960 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 12 17:14:27 np0005481680 kernel: podman0: port 1(veth0) entered blocking state
Oct 12 17:14:27 np0005481680 kernel: podman0: port 1(veth0) entered disabled state
Oct 12 17:14:27 np0005481680 kernel: veth0: entered allmulticast mode
Oct 12 17:14:27 np0005481680 kernel: veth0: entered promiscuous mode
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5057] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 12 17:14:27 np0005481680 kernel: podman0: port 1(veth0) entered blocking state
Oct 12 17:14:27 np0005481680 kernel: podman0: port 1(veth0) entered forwarding state
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5099] device (veth0): carrier: link connected
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5107] device (podman0): carrier: link connected
Oct 12 17:14:27 np0005481680 systemd-udevd[228301]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:14:27 np0005481680 systemd-udevd[228303]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5597] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5612] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5626] device (podman0): Activation: starting connection 'podman0' (35cd8a6f-0bc6-4930-b04b-78a4bc9ff959)
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5634] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5638] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5644] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.5650] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 12 17:14:27 np0005481680 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.6098] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.6101] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 12 17:14:27 np0005481680 NetworkManager[44859]: <info>  [1760303667.6114] device (podman0): Activation: successful, device activated.
Oct 12 17:14:27 np0005481680 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 12 17:14:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:27 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:27 np0005481680 systemd[1]: Started libpod-conmon-91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75.scope.
Oct 12 17:14:28 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:28 np0005481680 podman[228275]: 2025-10-12 21:14:28.01930935 +0000 UTC m=+0.666855028 container init 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 12 17:14:28 np0005481680 podman[228275]: 2025-10-12 21:14:28.033801061 +0000 UTC m=+0.681346679 container start 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 12 17:14:28 np0005481680 podman[228275]: 2025-10-12 21:14:28.0381667 +0000 UTC m=+0.685712368 container attach 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 12 17:14:28 np0005481680 iscsid_config[228432]: iqn.1994-05.com.redhat:1c47ca9ad776#015
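Note: "#015" is journald's octal escape for the carriage return appended because the container ran with tty=True; the generated name itself is iqn.1994-05.com.redhat:1c47ca9ad776. A sketch of capturing the IQN in a script with the CR stripped:

    iqn=$(podman run --rm \
      quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f \
      /usr/sbin/iscsi-iname | tr -d '\r')
    echo "$iqn"    # -> iqn.1994-05.com.redhat:1c47ca9ad776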
Oct 12 17:14:28 np0005481680 systemd[1]: libpod-91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75.scope: Deactivated successfully.
Oct 12 17:14:28 np0005481680 podman[228275]: 2025-10-12 21:14:28.042744434 +0000 UTC m=+0.690290082 container died 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:14:28 np0005481680 kernel: podman0: port 1(veth0) entered disabled state
Oct 12 17:14:28 np0005481680 kernel: veth0 (unregistering): left allmulticast mode
Oct 12 17:14:28 np0005481680 kernel: veth0 (unregistering): left promiscuous mode
Oct 12 17:14:28 np0005481680 kernel: podman0: port 1(veth0) entered disabled state
Oct 12 17:14:28 np0005481680 NetworkManager[44859]: <info>  [1760303668.1124] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:14:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:28 np0005481680 systemd[1]: run-netns-netns\x2da5aec49a\x2d0e28\x2d05a4\x2d2161\x2df7b770cc4109.mount: Deactivated successfully.
Oct 12 17:14:28 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75-userdata-shm.mount: Deactivated successfully.
Oct 12 17:14:28 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a3926131ee0c2cfb1961be63a9725164b0bb4c660e11e272fcdc295971ed624c-merged.mount: Deactivated successfully.
Oct 12 17:14:28 np0005481680 podman[228275]: 2025-10-12 21:14:28.55689857 +0000 UTC m=+1.204444188 container remove 91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 12 17:14:28 np0005481680 systemd[1]: libpod-conmon-91c18d87a4cda60b41d1f32e60abfa62985a7d9cafbf8b9d6233531fdd0f6a75.scope: Deactivated successfully.
Oct 12 17:14:28 np0005481680 python3.9[228202]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 12 17:14:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:28 np0005481680 python3.9[228202]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
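Note: this error is benign. generate_systemd={} makes the module run "podman generate systemd" after the --rm container has already exited and been removed, so the name no longer resolves. The deprecation text points at Quadlets; purely to illustrate the format it refers to (a hypothetical unit, not something this playbook installs):

    cat > /etc/containers/systemd/iscsid_config.container <<'EOF'
    [Container]
    Image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
    Exec=/usr/sbin/iscsi-iname
    EOF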
Oct 12 17:14:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:29 np0005481680 python3.9[228680]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:29.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:29 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:30.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:30 np0005481680 python3.9[228804]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303668.977825-317-215958544576225/.source.iscsi _original_basename=.dip1er5h follow=False checksum=e56dbac86c14cbc224194e3a5f1a0ab904ae37a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
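Note: the copy above lands the IQN in the standard open-iscsi location. Assuming it carries the name generated earlier (the staged source content itself is not logged), the resulting file is a single line:

    cat > /etc/iscsi/initiatorname.iscsi <<'EOF'
    InitiatorName=iqn.1994-05.com.redhat:1c47ca9ad776
    EOF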
Oct 12 17:14:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:31 np0005481680 python3.9[228957]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:31 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:32] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:32 np0005481680 python3.9[229108]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:14:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:14:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:14:33 np0005481680 python3.9[229263]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
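Note: the lineinfile task pins the CHAP digest preference so stronger algorithms are negotiated first. A simplified shell equivalent (the module also replaces an existing node.session.auth.chap_algs line in place, which this append-only sketch does not):

    grep -q '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf || \
      echo 'node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5' >> /etc/iscsi/iscsid.conf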
Oct 12 17:14:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:33 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:34.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:34 np0005481680 python3.9[229416]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
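Note: setype=container_file_t with recurse=True labels /var/local/libexec so containers may read the helper scripts installed next. The file module applies the label directly; a persistent equivalent that also survives a full relabel would be:

    semanage fcontext -a -t container_file_t '/var/local/libexec(/.*)?'
    restorecon -Rv /var/local/libexec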
Oct 12 17:14:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:14:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:35 np0005481680 python3.9[229568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:35 np0005481680 podman[229619]: 2025-10-12 21:14:35.657250677 +0000 UTC m=+0.172730610 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
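Note: the health_status=healthy event above is podman's periodic execution of the configured check ('test': '/openstack/healthcheck'). The same check can be triggered on demand:

    podman healthcheck run ovn_controller && echo healthy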
Oct 12 17:14:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:35 np0005481680 python3.9[229666]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:35 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:36.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:36 np0005481680 python3.9[229826]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:37.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:14:37 np0005481680 python3.9[229904]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:37.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:37 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:38 np0005481680 python3.9[230058]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
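Note: mode=420 is not a typo: 420 decimal equals 0644 octal, the usual artifact of an unquoted "mode: 0644" in YAML (YAML parses the leading-zero literal as octal). Quoting the mode in the task avoids the ambiguity; the resulting permissions are the same either way:

    printf '%o\n' 420    # -> 644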
Oct 12 17:14:38 np0005481680 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 12 17:14:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:38 np0005481680 python3.9[230235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:39 np0005481680 python3.9[230314]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:39.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211439 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:14:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:39 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:40 np0005481680 python3.9[230467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:40 np0005481680 python3.9[230545]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
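Note: the preset pairs with the unit installed just above so that "systemctl preset" runs (and image builds) enable it by default. Assuming the conventional one-line content for a preset of this kind:

    cat > /etc/systemd/system-preset/91-edpm-container-shutdown.preset <<'EOF'
    enable edpm-container-shutdown.service
    EOF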
Oct 12 17:14:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:41.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:41 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:42] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:14:42 np0005481680 python3.9[230699]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
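Note: the systemd module call above, with daemon_reload=True, enabled=True and state=started, is the equivalent of:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service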
Oct 12 17:14:42 np0005481680 systemd[1]: Reloading.
Oct 12 17:14:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:42 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:14:42 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:14:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:43.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:43 np0005481680 python3.9[230889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:43 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:44 np0005481680 python3.9[230967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:14:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:45 np0005481680 python3.9[231120]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:45 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:45 np0005481680 python3.9[231199]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:46.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:14:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:14:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
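Note: this burst of audited mon_commands is the cephadm mgr module's periodic reconciliation pass. The same queries map onto plain ceph CLI calls:

    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed -f json
    ceph config-key get mgr/cephadm/spec.nfs.cephfs   # the value set above is not logged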
Oct 12 17:14:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:47.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:14:47 np0005481680 python3.9[231432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:14:47 np0005481680 systemd[1]: Reloading.
Oct 12 17:14:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:14:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:14:47 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:14:47 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:14:47 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 17:14:47 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 17:14:47 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 17:14:47 np0005481680 systemd[1]: Finished Create netns directory.
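Note: netns-placeholder is a oneshot that pre-creates the /run/netns mountpoint before any container needs it; the run-netns-placeholder.mount cleanup suggests it adds and removes a throwaway namespace, roughly as follows (an assumption, since the unit body is not logged):

    ip netns add placeholder
    ip netns delete placeholder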
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.67452521 +0000 UTC m=+0.082557543 container create f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:47 np0005481680 systemd[1]: Started libpod-conmon-f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08.scope.
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.641403603 +0000 UTC m=+0.049436026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.77884144 +0000 UTC m=+0.186873853 container init f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.790448557 +0000 UTC m=+0.198480920 container start f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.794574592 +0000 UTC m=+0.202606945 container attach f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:14:47 np0005481680 sad_blackburn[231609]: 167 167
Oct 12 17:14:47 np0005481680 systemd[1]: libpod-f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08.scope: Deactivated successfully.
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.800933255 +0000 UTC m=+0.208965608 container died f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:14:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-afb5d0396c6ce4fea9999846990afdeb2e3da50fccd0c6dff167d50415551cba-merged.mount: Deactivated successfully.
Oct 12 17:14:47 np0005481680 podman[231567]: 2025-10-12 21:14:47.855931172 +0000 UTC m=+0.263963525 container remove f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:14:47 np0005481680 systemd[1]: libpod-conmon-f3afed6c6cccc2642ab70622586ebd7bcc61922fecfc810adb8a7f54bf36ed08.scope: Deactivated successfully.
Oct 12 17:14:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:47 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.072444412 +0000 UTC m=+0.052579166 container create f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 17:14:48 np0005481680 systemd[1]: Started libpod-conmon-f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e.scope.
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.046786735 +0000 UTC m=+0.026921559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
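
The xfs notices above quote 0x7fffffff, the largest 32-bit signed time_t; a one-line check (plain Python, nothing beyond the value printed by the kernel assumed) confirms the date that limit decodes to:

    # 0x7fffffff seconds after the Unix epoch is the "year 2038" boundary
    # the kernel is warning about for these xfs bind mounts.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
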
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.17125438 +0000 UTC m=+0.151389134 container init f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.183208236 +0000 UTC m=+0.163343020 container start f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.187881966 +0000 UTC m=+0.168016740 container attach f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:48.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:14:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:14:48 np0005481680 vibrant_edison[231693]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:14:48 np0005481680 vibrant_edison[231693]: --> All data devices are unavailable
Oct 12 17:14:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:14:48 np0005481680 python3.9[231785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:48 np0005481680 systemd[1]: libpod-f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e.scope: Deactivated successfully.
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.637840908 +0000 UTC m=+0.617975712 container died f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 12 17:14:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-58c5b363ae1cf63e271e85aa2322c6133dea90c20a704c679a34fe9f9fa8ee1b-merged.mount: Deactivated successfully.
Oct 12 17:14:48 np0005481680 podman[231635]: 2025-10-12 21:14:48.714221763 +0000 UTC m=+0.694356537 container remove f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 12 17:14:48 np0005481680 systemd[1]: libpod-conmon-f3c2a14e5e01297af663719649f772393e73325f1aad1f7d4430c4060ee63b9e.scope: Deactivated successfully.
Oct 12 17:14:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004030 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.438937315 +0000 UTC m=+0.067532759 container create b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:14:49 np0005481680 systemd[1]: Started libpod-conmon-b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2.scope.
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.410678442 +0000 UTC m=+0.039273936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.535860776 +0000 UTC m=+0.164456220 container init b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.546849317 +0000 UTC m=+0.175444731 container start b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.550255524 +0000 UTC m=+0.178850938 container attach b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:14:49 np0005481680 tender_austin[232069]: 167 167
Oct 12 17:14:49 np0005481680 systemd[1]: libpod-b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2.scope: Deactivated successfully.
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.555654882 +0000 UTC m=+0.184250286 container died b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5cbca01f7c8efb68505f0374636c42558b3613dfd920efd582d2c0aeeb65fd41-merged.mount: Deactivated successfully.
Oct 12 17:14:49 np0005481680 podman[232049]: 2025-10-12 21:14:49.60677816 +0000 UTC m=+0.235373604 container remove b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_austin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:49 np0005481680 systemd[1]: libpod-conmon-b7be5e7c3a0492401b357f185e4b5b89a83dbefcf0f0caa6052c523a0dba75e2.scope: Deactivated successfully.
Oct 12 17:14:49 np0005481680 python3.9[232060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:49.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
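
The radosgw "beast" access lines recurring throughout this window share one layout; a minimal parsing sketch, with the field order inferred from the lines above rather than taken from radosgw documentation:

    # Pull client IP, verb, status, and latency out of a beast access line.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<verb>\S+) (?P<path>\S+) [^"]+" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:14:48.215 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m.group('ip'), m.group('verb'), m.group('status'), m.group('latency'))
    # -> 192.168.122.102 HEAD 200 0.000000000
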
Oct 12 17:14:49 np0005481680 podman[232098]: 2025-10-12 21:14:49.770779867 +0000 UTC m=+0.040719514 container create 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:14:49 np0005481680 systemd[1]: Started libpod-conmon-388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6.scope.
Oct 12 17:14:49 np0005481680 podman[232098]: 2025-10-12 21:14:49.750992569 +0000 UTC m=+0.020932216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ba8bc16969a6fdd099fd1c4c32aff974036f948cd87b74c5accb93411bbd94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ba8bc16969a6fdd099fd1c4c32aff974036f948cd87b74c5accb93411bbd94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ba8bc16969a6fdd099fd1c4c32aff974036f948cd87b74c5accb93411bbd94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ba8bc16969a6fdd099fd1c4c32aff974036f948cd87b74c5accb93411bbd94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:49 np0005481680 podman[232098]: 2025-10-12 21:14:49.86865521 +0000 UTC m=+0.138594907 container init 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:14:49 np0005481680 podman[232098]: 2025-10-12 21:14:49.884551597 +0000 UTC m=+0.154491244 container start 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:14:49 np0005481680 podman[232098]: 2025-10-12 21:14:49.88934443 +0000 UTC m=+0.159284087 container attach 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:49 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:50 np0005481680 zealous_raman[232154]: {
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:    "0": [
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:        {
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "devices": [
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "/dev/loop3"
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            ],
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "lv_name": "ceph_lv0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "lv_size": "21470642176",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "name": "ceph_lv0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "tags": {
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.cluster_name": "ceph",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.crush_device_class": "",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.encrypted": "0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.osd_id": "0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.type": "block",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.vdo": "0",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:                "ceph.with_tpm": "0"
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            },
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "type": "block",
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:            "vg_name": "ceph_vg0"
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:        }
Oct 12 17:14:50 np0005481680 zealous_raman[232154]:    ]
Oct 12 17:14:50 np0005481680 zealous_raman[232154]: }
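
The zealous_raman container prints its ceph-volume LVM inventory as JSON, one journal line per fragment; a small sketch (the syslog prefix pattern and the reassemble helper are illustrative, assumed from the lines above) that strips the prefix and rebuilds the document for parsing:

    import json, re

    # Matches "Oct 12 17:14:50 np0005481680 zealous_raman[232154]: " prefixes.
    PREFIX = re.compile(r'^\w+ +\d+ [\d:]+ \S+ \S+\[\d+\]: ')

    def reassemble(lines):
        payload = ''.join(PREFIX.sub('', l) for l in lines)
        return json.loads(payload)

    # inventory = reassemble(raw_lines)
    # inventory["0"][0]["lv_path"]  -> "/dev/ceph_vg0/ceph_lv0"
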
Oct 12 17:14:50 np0005481680 systemd[1]: libpod-388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6.scope: Deactivated successfully.
Oct 12 17:14:50 np0005481680 podman[232098]: 2025-10-12 21:14:50.206845063 +0000 UTC m=+0.476784700 container died 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:14:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:50 np0005481680 python3.9[232234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303688.9473128-779-157432743225944/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-36ba8bc16969a6fdd099fd1c4c32aff974036f948cd87b74c5accb93411bbd94-merged.mount: Deactivated successfully.
Oct 12 17:14:50 np0005481680 podman[232098]: 2025-10-12 21:14:50.285705291 +0000 UTC m=+0.555644918 container remove 388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:14:50 np0005481680 systemd[1]: libpod-conmon-388a75b1b1d7a50d60cc312fae65ca0ad7b6e1e01c89e6de5c1bde73b3fb6bd6.scope: Deactivated successfully.
Oct 12 17:14:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:14:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.139301792 +0000 UTC m=+0.056242120 container create 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:51 np0005481680 systemd[1]: Started libpod-conmon-818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae.scope.
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.115475072 +0000 UTC m=+0.032415400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.253788621 +0000 UTC m=+0.170728999 container init 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.269911533 +0000 UTC m=+0.186851851 container start 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.274904251 +0000 UTC m=+0.191844609 container attach 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:14:51 np0005481680 unruffled_kare[232513]: 167 167
Oct 12 17:14:51 np0005481680 systemd[1]: libpod-818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae.scope: Deactivated successfully.
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.277313902 +0000 UTC m=+0.194254250 container died 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:14:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-12e21fe623d0be7a87c919ce23a087deea151364f943050bb123c42445e6d9ad-merged.mount: Deactivated successfully.
Oct 12 17:14:51 np0005481680 podman[232468]: 2025-10-12 21:14:51.337940554 +0000 UTC m=+0.254880882 container remove 818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:51 np0005481680 systemd[1]: libpod-conmon-818c52960b239be97348cd87c9a0fc2cff7db8e59e9551e185f0c65c2b1a26ae.scope: Deactivated successfully.
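
Each cephadm probe above runs the same podman cycle, create, start, attach, died, remove, in well under a second; a sketch (event extraction assumed done elsewhere, helper names hypothetical) that pairs create/died timestamps per container ID to measure those lifetimes:

    from datetime import datetime

    def parse(ts):
        # e.g. "2025-10-12 21:14:51.139301792"; %f takes at most 6 digits,
        # so truncate podman's nanosecond fraction to microseconds.
        head, frac = ts.split('.')
        return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

    created = {}  # container id -> create timestamp
    def feed(action, cid, ts):
        if action == 'create':
            created[cid] = parse(ts)
        elif action == 'died' and cid in created:
            print(cid[:12], (parse(ts) - created.pop(cid)).total_seconds(), 's')
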
Oct 12 17:14:51 np0005481680 python3.9[232516]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:14:51 np0005481680 podman[232542]: 2025-10-12 21:14:51.622448013 +0000 UTC m=+0.103831117 container create 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:14:51 np0005481680 podman[232542]: 2025-10-12 21:14:51.561418591 +0000 UTC m=+0.042801735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:14:51 np0005481680 systemd[1]: Started libpod-conmon-46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9.scope.
Oct 12 17:14:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:14:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ac428ba7ed61bb56da9d35d04cc6e9522695ab007c7d7b2369aeb5ab25ac13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ac428ba7ed61bb56da9d35d04cc6e9522695ab007c7d7b2369aeb5ab25ac13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ac428ba7ed61bb56da9d35d04cc6e9522695ab007c7d7b2369aeb5ab25ac13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ac428ba7ed61bb56da9d35d04cc6e9522695ab007c7d7b2369aeb5ab25ac13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:14:51 np0005481680 podman[232542]: 2025-10-12 21:14:51.732766776 +0000 UTC m=+0.214149880 container init 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:14:51 np0005481680 podman[232542]: 2025-10-12 21:14:51.747338019 +0000 UTC m=+0.228721093 container start 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:14:51 np0005481680 podman[232542]: 2025-10-12 21:14:51.750961402 +0000 UTC m=+0.232344516 container attach 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:14:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:51.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:51 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:14:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:51 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:14:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:51 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:14:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:14:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:52.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:52 np0005481680 python3.9[232728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:14:52 np0005481680 recursing_dhawan[232603]: {}
Oct 12 17:14:52 np0005481680 lvm[232863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:14:52 np0005481680 lvm[232863]: VG ceph_vg0 finished
Oct 12 17:14:52 np0005481680 systemd[1]: libpod-46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9.scope: Deactivated successfully.
Oct 12 17:14:52 np0005481680 systemd[1]: libpod-46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9.scope: Consumed 1.437s CPU time.
Oct 12 17:14:52 np0005481680 podman[232542]: 2025-10-12 21:14:52.595295125 +0000 UTC m=+1.076678189 container died 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:14:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a7ac428ba7ed61bb56da9d35d04cc6e9522695ab007c7d7b2369aeb5ab25ac13-merged.mount: Deactivated successfully.
Oct 12 17:14:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:14:52 np0005481680 podman[232848]: 2025-10-12 21:14:52.634241022 +0000 UTC m=+0.108719733 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 12 17:14:52 np0005481680 podman[232542]: 2025-10-12 21:14:52.645735316 +0000 UTC m=+1.127118380 container remove 46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_dhawan, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:14:52 np0005481680 systemd[1]: libpod-conmon-46c32478f814c0efe39dc20a05e8ec913eda54e4c606bd5d083e596328ff2ab9.scope: Deactivated successfully.
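
The ovn_metadata_agent health_status event a few lines above embeds the container's definition as a Python dict literal in its config_data label; once that substring is isolated, ast.literal_eval is enough to load it. A sketch using an abridged copy of the value, not a full label parser:

    import ast

    config_data = ("{'cgroupns': 'host', 'depends_on': ['openvswitch.service'], "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}}")  # abridged from the log line
    cfg = ast.literal_eval(config_data)
    print(cfg['healthcheck']['test'])  # -> /openstack/healthcheck
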
Oct 12 17:14:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:14:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:14:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:52 np0005481680 python3.9[232939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303691.6877558-854-271220552789979/.source.json _original_basename=.xtfpyq7u follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:14:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:53.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:53 np0005481680 python3.9[233118]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:14:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:53 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:14:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:14:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:14:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:55.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:55 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:14:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:14:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:14:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:56 np0005481680 python3.9[233549]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 12 17:14:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:57.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:14:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:14:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:14:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:57.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:57 np0005481680 python3.9[233703]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:14:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:57 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:14:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:14:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:14:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:14:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:14:58 np0005481680 python3.9[233880]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 12 17:14:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:14:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:14:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:14:59.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:14:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211459 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:14:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:14:59 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:00.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:15:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:01 np0005481680 python3[234060]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:15:01 np0005481680 podman[234098]: 2025-10-12 21:15:01.44747256 +0000 UTC m=+0.069826998 container create 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:15:01 np0005481680 podman[234098]: 2025-10-12 21:15:01.40876316 +0000 UTC m=+0.031117668 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 12 17:15:01 np0005481680 python3[234060]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 12 17:15:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:01.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:01 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:02 np0005481680 python3.9[234288]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:15:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 12 17:15:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:15:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:15:03 np0005481680 python3.9[234444]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:03 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:04 np0005481680 python3.9[234521]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:15:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:04 np0005481680 python3.9[234672]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760303704.2188542-1118-101192511041550/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:05 np0005481680 python3.9[234749]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:15:05 np0005481680 systemd[1]: Reloading.
Oct 12 17:15:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:05.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:05 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:15:05 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:15:05 np0005481680 podman[234752]: 2025-10-12 21:15:05.951860589 +0000 UTC m=+0.167610017 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 12 17:15:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:05 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:15:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:06 np0005481680 python3.9[234884]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:15:06 np0005481680 systemd[1]: Reloading.
Oct 12 17:15:07 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:15:07 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:15:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:07.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:15:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:07.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:15:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:07.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:15:07 np0005481680 systemd[1]: Starting iscsid container...
Oct 12 17:15:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7cd31a83eb828514749c88754e0b49dd3dffa2ab7766cce61a3c9528de6ee3/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7cd31a83eb828514749c88754e0b49dd3dffa2ab7766cce61a3c9528de6ee3/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7cd31a83eb828514749c88754e0b49dd3dffa2ab7766cce61a3c9528de6ee3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:07 np0005481680 systemd[1]: Started /usr/bin/podman healthcheck run 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8.
Oct 12 17:15:07 np0005481680 podman[234926]: 2025-10-12 21:15:07.498496343 +0000 UTC m=+0.163270589 container init 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 12 17:15:07 np0005481680 iscsid[234942]: + sudo -E kolla_set_configs
Oct 12 17:15:07 np0005481680 podman[234926]: 2025-10-12 21:15:07.536971167 +0000 UTC m=+0.201745423 container start 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:15:07 np0005481680 podman[234926]: iscsid
Oct 12 17:15:07 np0005481680 systemd[1]: Started iscsid container.
Oct 12 17:15:07 np0005481680 systemd[1]: Created slice User Slice of UID 0.
Oct 12 17:15:07 np0005481680 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 12 17:15:07 np0005481680 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 12 17:15:07 np0005481680 systemd[1]: Starting User Manager for UID 0...
Oct 12 17:15:07 np0005481680 podman[234949]: 2025-10-12 21:15:07.647406883 +0000 UTC m=+0.088355622 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:15:07 np0005481680 systemd[1]: 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8-440618d6b475d69f.service: Main process exited, code=exited, status=1/FAILURE
Oct 12 17:15:07 np0005481680 systemd[1]: 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8-440618d6b475d69f.service: Failed with result 'exit-code'.
Oct 12 17:15:07 np0005481680 systemd[234963]: Queued start job for default target Main User Target.
Oct 12 17:15:07 np0005481680 systemd[234963]: Created slice User Application Slice.
Oct 12 17:15:07 np0005481680 systemd[234963]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 12 17:15:07 np0005481680 systemd[234963]: Started Daily Cleanup of User's Temporary Directories.
Oct 12 17:15:07 np0005481680 systemd[234963]: Reached target Paths.
Oct 12 17:15:07 np0005481680 systemd[234963]: Reached target Timers.
Oct 12 17:15:07 np0005481680 systemd[234963]: Starting D-Bus User Message Bus Socket...
Oct 12 17:15:07 np0005481680 systemd[234963]: Starting Create User's Volatile Files and Directories...
Oct 12 17:15:07 np0005481680 systemd[234963]: Listening on D-Bus User Message Bus Socket.
Oct 12 17:15:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:07 np0005481680 systemd[234963]: Reached target Sockets.
Oct 12 17:15:07 np0005481680 systemd[234963]: Finished Create User's Volatile Files and Directories.
Oct 12 17:15:07 np0005481680 systemd[234963]: Reached target Basic System.
Oct 12 17:15:07 np0005481680 systemd[234963]: Reached target Main User Target.
Oct 12 17:15:07 np0005481680 systemd[234963]: Startup finished in 139ms.
Oct 12 17:15:07 np0005481680 systemd[1]: Started User Manager for UID 0.
Oct 12 17:15:07 np0005481680 systemd[1]: Started Session c3 of User root.
Oct 12 17:15:07 np0005481680 iscsid[234942]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:15:07 np0005481680 iscsid[234942]: INFO:__main__:Validating config file
Oct 12 17:15:07 np0005481680 iscsid[234942]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:15:07 np0005481680 iscsid[234942]: INFO:__main__:Writing out command to execute
Oct 12 17:15:07 np0005481680 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 12 17:15:07 np0005481680 iscsid[234942]: ++ cat /run_command
Oct 12 17:15:07 np0005481680 iscsid[234942]: + CMD='/usr/sbin/iscsid -f'
Oct 12 17:15:07 np0005481680 iscsid[234942]: + ARGS=
Oct 12 17:15:07 np0005481680 iscsid[234942]: + sudo kolla_copy_cacerts
Oct 12 17:15:07 np0005481680 systemd[1]: Started Session c4 of User root.
Oct 12 17:15:07 np0005481680 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 12 17:15:07 np0005481680 iscsid[234942]: + [[ ! -n '' ]]
Oct 12 17:15:07 np0005481680 iscsid[234942]: + . kolla_extend_start
Oct 12 17:15:07 np0005481680 iscsid[234942]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 12 17:15:07 np0005481680 iscsid[234942]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 12 17:15:07 np0005481680 iscsid[234942]: Running command: '/usr/sbin/iscsid -f'
Oct 12 17:15:07 np0005481680 iscsid[234942]: + umask 0022
Oct 12 17:15:07 np0005481680 iscsid[234942]: + exec /usr/sbin/iscsid -f
Oct 12 17:15:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:07 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:08 np0005481680 kernel: Loading iSCSI transport class v2.0-870.
Oct 12 17:15:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:08 np0005481680 python3.9[235148]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:15:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:09 np0005481680 python3.9[235300]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:09 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:10.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:10 np0005481680 python3.9[235454]: ansible-ansible.builtin.service_facts Invoked
Oct 12 17:15:10 np0005481680 network[235471]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:15:10 np0005481680 network[235472]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:15:10 np0005481680 network[235473]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:15:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:15:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:11 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0040d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:12.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:13 np0005481680 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 12 17:15:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:13.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:13 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c0040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:14 np0005481680 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 12 17:15:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:15 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:16.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:17.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:15:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:17 np0005481680 python3.9[235758]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 12 17:15:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:17 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:17 np0005481680 systemd[1]: Stopping User Manager for UID 0...
Oct 12 17:15:18 np0005481680 systemd[234963]: Activating special unit Exit the Session...
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped target Main User Target.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped target Basic System.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped target Paths.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped target Sockets.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped target Timers.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 12 17:15:18 np0005481680 systemd[234963]: Closed D-Bus User Message Bus Socket.
Oct 12 17:15:18 np0005481680 systemd[234963]: Stopped Create User's Volatile Files and Directories.
Oct 12 17:15:18 np0005481680 systemd[234963]: Removed slice User Application Slice.
Oct 12 17:15:18 np0005481680 systemd[234963]: Reached target Shutdown.
Oct 12 17:15:18 np0005481680 systemd[234963]: Finished Exit the Session.
Oct 12 17:15:18 np0005481680 systemd[234963]: Reached target Exit the Session.
Oct 12 17:15:18 np0005481680 systemd[1]: user@0.service: Deactivated successfully.
Oct 12 17:15:18 np0005481680 systemd[1]: Stopped User Manager for UID 0.
Oct 12 17:15:18 np0005481680 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 12 17:15:18 np0005481680 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 12 17:15:18 np0005481680 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 12 17:15:18 np0005481680 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 12 17:15:18 np0005481680 systemd[1]: Removed slice User Slice of UID 0.
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:15:18
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:15:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:15:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:15:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:15:18.348 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:15:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:15:18.348 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:15:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:15:18.348 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:15:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:18 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:19 np0005481680 python3.9[235936]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 12 17:15:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:19.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:19 np0005481680 python3.9[236094]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:19 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:20 np0005481680 python3.9[236217]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303719.2821176-1340-222086737018583/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:15:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:20 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:21 np0005481680 python3.9[236370]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:21.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:21 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:22] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:22] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:22.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:22 np0005481680 python3.9[236523]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:15:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:22 np0005481680 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 12 17:15:22 np0005481680 systemd[1]: Stopped Load Kernel Modules.
Oct 12 17:15:22 np0005481680 systemd[1]: Stopping Load Kernel Modules...
Oct 12 17:15:22 np0005481680 systemd[1]: Starting Load Kernel Modules...
Oct 12 17:15:22 np0005481680 systemd[1]: Finished Load Kernel Modules.
Oct 12 17:15:22 np0005481680 podman[236527]: 2025-10-12 21:15:22.737637114 +0000 UTC m=+0.059421604 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 12 17:15:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:22 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:23 np0005481680 python3.9[236699]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:23.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:23 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:24 np0005481680 python3.9[236852]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:24 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c004170 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:25 np0005481680 python3.9[237005]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:25 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:26 np0005481680 python3.9[237159]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:26 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:27.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:15:27 np0005481680 python3.9[237282]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303725.8852842-1514-212974819424401/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:27 np0005481680 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 12 17:15:27 np0005481680 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 12 17:15:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:27 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:28.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:28 np0005481680 python3.9[237439]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:15:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:28 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:29 np0005481680 python3.9[237592]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:29.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:29 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:30 np0005481680 python3.9[237746]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Oct 12 17:15:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:30 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:31 np0005481680 python3.9[237898]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:31 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:32] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:32] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:32 np0005481680 python3.9[238052]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:32 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:32 np0005481680 python3.9[238204]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:15:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:15:33 np0005481680 python3.9[238357]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:33.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:33 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:34.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:34 np0005481680 python3.9[238510]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:34 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:35 np0005481680 python3.9[238662]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:15:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:35.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.018559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736018604, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 865, "num_deletes": 251, "total_data_size": 1407075, "memory_usage": 1432000, "flush_reason": "Manual Compaction"}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736029916, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1392433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19046, "largest_seqno": 19910, "table_properties": {"data_size": 1388142, "index_size": 2007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9441, "raw_average_key_size": 19, "raw_value_size": 1379536, "raw_average_value_size": 2844, "num_data_blocks": 89, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303660, "oldest_key_time": 1760303660, "file_creation_time": 1760303736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 11429 microseconds, and 7321 cpu microseconds.
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.029985) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1392433 bytes OK
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.030013) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.032274) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.032297) EVENT_LOG_v1 {"time_micros": 1760303736032290, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.032326) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1402965, prev total WAL file size 1402965, number of live WAL files 2.
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.033203) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1359KB)], [41(12MB)]
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736033256, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 14879644, "oldest_snapshot_seqno": -1}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4958 keys, 12703737 bytes, temperature: kUnknown
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736134248, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12703737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12669785, "index_size": 20466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 126571, "raw_average_key_size": 25, "raw_value_size": 12579014, "raw_average_value_size": 2537, "num_data_blocks": 839, "num_entries": 4958, "num_filter_entries": 4958, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.134584) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12703737 bytes
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.136336) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.9 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(19.8) write-amplify(9.1) OK, records in: 5474, records dropped: 516 output_compression: NoCompression
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.136366) EVENT_LOG_v1 {"time_micros": 1760303736136352, "job": 20, "event": "compaction_finished", "compaction_time_micros": 101109, "compaction_time_cpu_micros": 43427, "output_level": 6, "num_output_files": 1, "total_output_size": 12703737, "num_input_records": 5474, "num_output_records": 4958, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736136954, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303736140985, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.033119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.141044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.141053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.141099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.141105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:15:36.141109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:15:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:36 np0005481680 python3.9[238818]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:36 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:37 np0005481680 podman[238942]: 2025-10-12 21:15:37.02895151 +0000 UTC m=+0.126379839 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 12 17:15:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:37.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:15:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:37.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:15:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:37.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:15:37 np0005481680 python3.9[238990]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:37.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:37 np0005481680 podman[239122]: 2025-10-12 21:15:37.824501304 +0000 UTC m=+0.080772288 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:15:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:38 np0005481680 python3.9[239170]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:38 np0005481680 python3.9[239248]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:38 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:39 np0005481680 python3.9[239426]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:39.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:39 np0005481680 python3.9[239505]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:15:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:40 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:41 np0005481680 python3.9[239657]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:42] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:42] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:15:42 np0005481680 python3.9[239811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:42.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:42 np0005481680 python3.9[239889]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:42 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:43 np0005481680 python3.9[240042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:43.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:44 np0005481680 python3.9[240121]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:44.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:44 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:45 np0005481680 python3.9[240273]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:15:45 np0005481680 systemd[1]: Reloading.
Oct 12 17:15:45 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:15:45 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:15:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:45.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:46 np0005481680 python3.9[240463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:46 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:47.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:15:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:47.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:15:47 np0005481680 python3.9[240541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:47.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:48 np0005481680 python3.9[240695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:15:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:15:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:15:48 np0005481680 python3.9[240773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:48 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:49 np0005481680 python3.9[240926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:15:49 np0005481680 systemd[1]: Reloading.
Oct 12 17:15:49 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:15:49 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:15:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:49.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:50 np0005481680 systemd[1]: Starting Create netns directory...
Oct 12 17:15:50 np0005481680 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 12 17:15:50 np0005481680 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 12 17:15:50 np0005481680 systemd[1]: Finished Create netns directory.
Oct 12 17:15:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:50.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:15:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:50 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:51 np0005481680 python3.9[241121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:15:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:15:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:52 np0005481680 python3.9[241274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:52 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:15:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7730 writes, 30K keys, 7730 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7730 writes, 1603 syncs, 4.82 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 695 writes, 1209 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s#012Interval WAL: 695 writes, 339 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 12 17:15:53 np0005481680 podman[241369]: 2025-10-12 21:15:53.082175291 +0000 UTC m=+0.093059210 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:15:53 np0005481680 python3.9[241442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303751.635498-2135-246631724261810/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:15:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:15:54 np0005481680 python3.9[241705]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:15:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:15:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:54 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:55 np0005481680 python3.9[241939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.28802962 +0000 UTC m=+0.079383612 container create c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.238180701 +0000 UTC m=+0.029534733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:55 np0005481680 systemd[1]: Started libpod-conmon-c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd.scope.
Oct 12 17:15:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.38975782 +0000 UTC m=+0.181111792 container init c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.396732357 +0000 UTC m=+0.188086349 container start c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:15:55 np0005481680 gallant_moser[242009]: 167 167
Oct 12 17:15:55 np0005481680 systemd[1]: libpod-c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd.scope: Deactivated successfully.
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.406513456 +0000 UTC m=+0.197867438 container attach c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.409392399 +0000 UTC m=+0.200746351 container died c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:15:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e9ee8a97af8828e2b309225c6c8d0630bacc015b71cb8b9f8f42d81a201da022-merged.mount: Deactivated successfully.
Oct 12 17:15:55 np0005481680 podman[241970]: 2025-10-12 21:15:55.481506896 +0000 UTC m=+0.272860858 container remove c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_moser, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 12 17:15:55 np0005481680 systemd[1]: libpod-conmon-c9f547f57e0758ef9fc89bf8ea89b13b0a1a2677c7ffc2ecbc897243d2fa42bd.scope: Deactivated successfully.
Oct 12 17:15:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:15:55 np0005481680 podman[242113]: 2025-10-12 21:15:55.726952394 +0000 UTC m=+0.065249021 container create 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:15:55 np0005481680 systemd[1]: Started libpod-conmon-6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf.scope.
Oct 12 17:15:55 np0005481680 podman[242113]: 2025-10-12 21:15:55.705996531 +0000 UTC m=+0.044293228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:15:55 np0005481680 podman[242113]: 2025-10-12 21:15:55.861729737 +0000 UTC m=+0.200026444 container init 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:15:55 np0005481680 podman[242113]: 2025-10-12 21:15:55.877636631 +0000 UTC m=+0.215933288 container start 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:15:55 np0005481680 podman[242113]: 2025-10-12 21:15:55.882100765 +0000 UTC m=+0.220397472 container attach 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:15:55 np0005481680 python3.9[242147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303754.6576576-2210-102645032832623/.source.json _original_basename=.krezsfxe follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:56 np0005481680 nice_haslett[242150]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:15:56 np0005481680 nice_haslett[242150]: --> All data devices are unavailable
Oct 12 17:15:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e2c004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:56 np0005481680 systemd[1]: libpod-6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf.scope: Deactivated successfully.
Oct 12 17:15:56 np0005481680 podman[242113]: 2025-10-12 21:15:56.294827493 +0000 UTC m=+0.633124150 container died 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:15:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-db020210f18f70af9481659f7bccd82d5e5809e538c29a1370b73e774c9d54c2-merged.mount: Deactivated successfully.
Oct 12 17:15:56 np0005481680 podman[242113]: 2025-10-12 21:15:56.35956204 +0000 UTC m=+0.697858697 container remove 6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_haslett, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:15:56 np0005481680 systemd[1]: libpod-conmon-6c37ddf72a9787a924fb7c96ed6f137544f9108ff3d6088257db249e283631cf.scope: Deactivated successfully.
Oct 12 17:15:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:56 np0005481680 python3.9[242371]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:15:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:56 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:15:57.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.114153262 +0000 UTC m=+0.065510899 container create 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:15:57 np0005481680 systemd[1]: Started libpod-conmon-4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7.scope.
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.08621939 +0000 UTC m=+0.037577067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.222564401 +0000 UTC m=+0.173922088 container init 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.235254535 +0000 UTC m=+0.186612172 container start 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 17:15:57 np0005481680 systemd[1]: libpod-4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7.scope: Deactivated successfully.
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.245507956 +0000 UTC m=+0.196865593 container attach 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:15:57 np0005481680 fervent_chaplygin[242516]: 167 167
Oct 12 17:15:57 np0005481680 conmon[242516]: conmon 4ff03acc96aacffdc2c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7.scope/container/memory.events
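Note: this conmon warning is usually benign for short-lived containers. The libpod scope is deactivated as soon as the contained process exits, so the cgroup (and its memory.events file) can be removed before conmon gets a chance to read it; no OOM accounting is lost for a command that has already finished.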
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.250191025 +0000 UTC m=+0.201548672 container died 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:15:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2150c84c2d0566fa6913be0f2c74aff856dbb679641df447a8f72d54e566fc65-merged.mount: Deactivated successfully.
Oct 12 17:15:57 np0005481680 podman[242463]: 2025-10-12 21:15:57.303644456 +0000 UTC m=+0.255002103 container remove 4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_chaplygin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 17:15:57 np0005481680 systemd[1]: libpod-conmon-4ff03acc96aacffdc2c074a58bec1b72bd043c4bcf260ef2fa812cea921b88d7.scope: Deactivated successfully.
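Note: the full fervent_chaplygin lifecycle above (create, init, start, attach, died, remove) completes in roughly 200 ms, which is consistent with cephadm running a one-shot command inside a throwaway ceph container. The "167 167" it printed is plausibly the uid/gid pair of the ceph user baked into the image.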
Oct 12 17:15:57 np0005481680 podman[242615]: 2025-10-12 21:15:57.575826446 +0000 UTC m=+0.065323115 container create 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:15:57 np0005481680 systemd[1]: Started libpod-conmon-577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16.scope.
Oct 12 17:15:57 np0005481680 podman[242615]: 2025-10-12 21:15:57.556475083 +0000 UTC m=+0.045971742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a279b36b4963ffc6da93b9426e14ed123c8da1984271028f6090d1042e600b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a279b36b4963ffc6da93b9426e14ed123c8da1984271028f6090d1042e600b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a279b36b4963ffc6da93b9426e14ed123c8da1984271028f6090d1042e600b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a279b36b4963ffc6da93b9426e14ed123c8da1984271028f6090d1042e600b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
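Note: the kernel prints this for XFS filesystems created without the bigtime feature; their inode timestamps cap at 0x7fffffff (2038-01-19 03:14:07 UTC). The paths are bind-mount targets inside the container's merged overlay directory, so one warning appears per remounted target.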
Oct 12 17:15:57 np0005481680 podman[242615]: 2025-10-12 21:15:57.669327407 +0000 UTC m=+0.158824106 container init 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:15:57 np0005481680 podman[242615]: 2025-10-12 21:15:57.679716111 +0000 UTC m=+0.169212770 container start 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:15:57 np0005481680 podman[242615]: 2025-10-12 21:15:57.683030445 +0000 UTC m=+0.172527104 container attach 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 17:15:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:57.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
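Note: these anonymous "HEAD / HTTP/1.0" requests recur about every two seconds from each of 192.168.122.100 and 192.168.122.102 and always return 200 with near-zero latency; the pattern is consistent with load-balancer health probes against the radosgw beast frontend rather than real client traffic.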
Oct 12 17:15:58 np0005481680 quirky_cray[242633]: {
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:    "0": [
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:        {
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "devices": [
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "/dev/loop3"
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            ],
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "lv_name": "ceph_lv0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "lv_size": "21470642176",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "name": "ceph_lv0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "tags": {
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.cluster_name": "ceph",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.crush_device_class": "",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.encrypted": "0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.osd_id": "0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.type": "block",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.vdo": "0",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:                "ceph.with_tpm": "0"
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            },
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "type": "block",
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:            "vg_name": "ceph_vg0"
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:        }
Oct 12 17:15:58 np0005481680 quirky_cray[242633]:    ]
Oct 12 17:15:58 np0005481680 quirky_cray[242633]: }
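Note: the JSON that quirky_cray printed above matches the shape of "ceph-volume lvm list --format json": a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated inside the "tags" object. A minimal parsing sketch follows, assuming the container output was captured to a file named ceph_volume_list.json (a hypothetical name, not from the log):

    # Parse ceph-volume-style JSON (assumed captured from the container
    # output above into ceph_volume_list.json) and summarize each OSD's
    # backing logical volume.
    import json

    with open("ceph_volume_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv["tags"]
            print(
                f"osd.{osd_id}: {lv['lv_path']} "
                f"on {','.join(lv['devices'])} "
                f"(osd_fsid={tags['ceph.osd_fsid']}, "
                f"encrypted={tags['ceph.encrypted']})"
            )

For the output above this prints one line: osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.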
Oct 12 17:15:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
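Note: these ganesha.nfsd/TIRPC events repeat throughout the section, always on fd 39. "proxy header rest len failed" is emitted when ntirpc's PROXY-protocol parser rejects the first bytes of a new connection, so a likely cause (an assumption, not confirmed by this log) is a plain TCP health probe connecting to the PROXY-enabled NFS listener and closing without sending a valid header. The literal "rlen = %" looks like a format-string artifact in this ganesha build rather than a real length value.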
Oct 12 17:15:58 np0005481680 systemd[1]: libpod-577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16.scope: Deactivated successfully.
Oct 12 17:15:58 np0005481680 podman[242615]: 2025-10-12 21:15:58.039679565 +0000 UTC m=+0.529176244 container died 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:15:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7a279b36b4963ffc6da93b9426e14ed123c8da1984271028f6090d1042e600b7-merged.mount: Deactivated successfully.
Oct 12 17:15:58 np0005481680 podman[242615]: 2025-10-12 21:15:58.153159054 +0000 UTC m=+0.642655733 container remove 577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:15:58 np0005481680 systemd[1]: libpod-conmon-577be851864cafe32052904f573cc4260b2b27feb7969f175a05ef81d046fa16.scope: Deactivated successfully.
Oct 12 17:15:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0dfc004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:15:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:15:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:15:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:15:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:15:58 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:15:58 np0005481680 podman[242966]: 2025-10-12 21:15:58.959228386 +0000 UTC m=+0.062613265 container create 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:15:59 np0005481680 systemd[1]: Started libpod-conmon-104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4.scope.
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:58.93073415 +0000 UTC m=+0.034119109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:59.049249848 +0000 UTC m=+0.152634757 container init 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:59.05958221 +0000 UTC m=+0.162967089 container start 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:59.064328292 +0000 UTC m=+0.167713211 container attach 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:15:59 np0005481680 flamboyant_buck[243013]: 167 167
Oct 12 17:15:59 np0005481680 systemd[1]: libpod-104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4.scope: Deactivated successfully.
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:59.066272831 +0000 UTC m=+0.169657720 container died 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 17:15:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cb0637d9935a0d3280840c4ae8d778bd3a710e9a403d7315c33fb9fafe5d3536-merged.mount: Deactivated successfully.
Oct 12 17:15:59 np0005481680 podman[242966]: 2025-10-12 21:15:59.120143633 +0000 UTC m=+0.223528542 container remove 104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_buck, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 12 17:15:59 np0005481680 systemd[1]: libpod-conmon-104f2fdc5e912215c6cca9df8509bc067005031af85bc7a5140265101303eed4.scope: Deactivated successfully.
Oct 12 17:15:59 np0005481680 python3.9[243079]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 12 17:15:59 np0005481680 podman[243085]: 2025-10-12 21:15:59.382376519 +0000 UTC m=+0.078266494 container create 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:15:59 np0005481680 systemd[1]: Started libpod-conmon-84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77.scope.
Oct 12 17:15:59 np0005481680 podman[243085]: 2025-10-12 21:15:59.353804352 +0000 UTC m=+0.049694367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:15:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:15:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e4744aaf9316c050496670a74ea9ba43509d314b753d18492fcf8d1da8103e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e4744aaf9316c050496670a74ea9ba43509d314b753d18492fcf8d1da8103e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e4744aaf9316c050496670a74ea9ba43509d314b753d18492fcf8d1da8103e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e4744aaf9316c050496670a74ea9ba43509d314b753d18492fcf8d1da8103e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:15:59 np0005481680 podman[243085]: 2025-10-12 21:15:59.494629767 +0000 UTC m=+0.190519762 container init 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:15:59 np0005481680 podman[243085]: 2025-10-12 21:15:59.516295309 +0000 UTC m=+0.212185284 container start 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:15:59 np0005481680 podman[243085]: 2025-10-12 21:15:59.530838319 +0000 UTC m=+0.226728284 container attach 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:15:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:15:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:15:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:15:59.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e000035a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:00 np0005481680 python3.9[243314]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:16:00 np0005481680 lvm[243330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:16:00 np0005481680 lvm[243330]: VG ceph_vg0 finished
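Note: these two lvm lines come from event-based autoactivation: a udev event on /dev/loop3 triggers pvscan, which reports the PV online, observes that every PV of ceph_vg0 is now present ("VG ceph_vg0 is complete"), and finishes processing the volume group.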
Oct 12 17:16:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:00 np0005481680 lvm[243353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:16:00 np0005481680 lvm[243353]: VG ceph_vg0 finished
Oct 12 17:16:00 np0005481680 clever_mcclintock[243101]: {}
Oct 12 17:16:00 np0005481680 systemd[1]: libpod-84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77.scope: Deactivated successfully.
Oct 12 17:16:00 np0005481680 podman[243085]: 2025-10-12 21:16:00.426586464 +0000 UTC m=+1.122476409 container died 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:16:00 np0005481680 systemd[1]: libpod-84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77.scope: Consumed 1.445s CPU time.
Oct 12 17:16:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-81e4744aaf9316c050496670a74ea9ba43509d314b753d18492fcf8d1da8103e-merged.mount: Deactivated successfully.
Oct 12 17:16:00 np0005481680 podman[243085]: 2025-10-12 21:16:00.485831812 +0000 UTC m=+1.181721757 container remove 84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:16:00 np0005481680 systemd[1]: libpod-conmon-84b8245628135679a84b60b89ee1db9e42dc57091ea281dedbbc4953afb81b77.scope: Deactivated successfully.
Oct 12 17:16:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:16:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:16:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:16:00 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:16:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:16:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:00 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:01 np0005481680 python3.9[243526]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 12 17:16:01 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:16:01 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:16:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:01.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:16:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:16:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:16:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:16:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:02 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:03 np0005481680 python3[243705]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:16:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:16:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:16:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:03.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e200031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:04 np0005481680 podman[243720]: 2025-10-12 21:16:04.610231715 +0000 UTC m=+1.238072941 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 12 17:16:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:04 np0005481680 podman[243777]: 2025-10-12 21:16:04.774120727 +0000 UTC m=+0.054765845 container create af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:16:04 np0005481680 podman[243777]: 2025-10-12 21:16:04.746282848 +0000 UTC m=+0.026927966 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 12 17:16:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:04 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:04 np0005481680 python3[243705]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
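Note: the line above is the exact podman create command that edpm_container_manage assembled from the multipathd startup config. The config_data label embeds the whole config as a Python-repr dict (single quotes, True), not JSON, so json.loads rejects it while ast.literal_eval parses it. A minimal recovery sketch, assuming the multipathd container created here still exists on the host:

    # Read back the edpm config_data label from the created container and
    # parse the Python-repr dict (json.loads would fail on the single
    # quotes and the True/False literals).
    import ast
    import subprocess

    raw = subprocess.run(
        ["podman", "inspect", "multipathd", "--format",
         '{{ index .Config.Labels "config_data" }}'],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    config = ast.literal_eval(raw)
    print(config["image"])
    for volume in config["volumes"]:
        print(volume)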
Oct 12 17:16:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:05 np0005481680 python3.9[243969]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:16:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:06 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:06 np0005481680 python3.9[244123]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:07.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:16:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:07.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:16:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:07.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
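Note: Alertmanager on compute-0 tried both ceph-dashboard webhook receivers, compute-1 and compute-2 on port 8443, hit i/o timeouts, and canceled the retries after two attempts each, so this notification was dropped for now; the dispatcher is expected to attempt delivery again on the next group interval.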
Oct 12 17:16:07 np0005481680 podman[244200]: 2025-10-12 21:16:07.257622485 +0000 UTC m=+0.118305533 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 12 17:16:07 np0005481680 python3.9[244201]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:16:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:07.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:08 np0005481680 podman[244350]: 2025-10-12 21:16:08.139362393 +0000 UTC m=+0.089540691 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 12 17:16:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e0c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:08 np0005481680 python3.9[244398]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760303767.5434227-2474-250293737045184/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:08 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:08 np0005481680 python3.9[244474]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:16:08 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:09 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:09 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:10 np0005481680 python3.9[244587]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:10 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:10 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:10 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:10 np0005481680 systemd[1]: Starting multipathd container...
Oct 12 17:16:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:16:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ad0f809537d8a550f7d72bba319141be93f0b669c7ac5c1253519f1fafbc91/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ad0f809537d8a550f7d72bba319141be93f0b669c7ac5c1253519f1fafbc91/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:10 np0005481680 systemd[1]: Started /usr/bin/podman healthcheck run af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950.
Oct 12 17:16:10 np0005481680 podman[244628]: 2025-10-12 21:16:10.598588183 +0000 UTC m=+0.148857501 container init af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:16:10 np0005481680 multipathd[244643]: + sudo -E kolla_set_configs
Oct 12 17:16:10 np0005481680 podman[244628]: 2025-10-12 21:16:10.631813229 +0000 UTC m=+0.182082507 container start af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:16:10 np0005481680 podman[244628]: multipathd
Oct 12 17:16:10 np0005481680 systemd[1]: Started multipathd container.
Oct 12 17:16:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:16:10 np0005481680 multipathd[244643]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:16:10 np0005481680 multipathd[244643]: INFO:__main__:Validating config file
Oct 12 17:16:10 np0005481680 multipathd[244643]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:16:10 np0005481680 multipathd[244643]: INFO:__main__:Writing out command to execute
Oct 12 17:16:10 np0005481680 multipathd[244643]: ++ cat /run_command
Oct 12 17:16:10 np0005481680 multipathd[244643]: + CMD='/usr/sbin/multipathd -d'
Oct 12 17:16:10 np0005481680 multipathd[244643]: + ARGS=
Oct 12 17:16:10 np0005481680 multipathd[244643]: + sudo kolla_copy_cacerts
Oct 12 17:16:10 np0005481680 podman[244649]: 2025-10-12 21:16:10.733027856 +0000 UTC m=+0.090490355 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 12 17:16:10 np0005481680 systemd[1]: af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-3f644f40fb983ea8.service: Main process exited, code=exited, status=1/FAILURE
Oct 12 17:16:10 np0005481680 systemd[1]: af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-3f644f40fb983ea8.service: Failed with result 'exit-code'.
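[Annotation] The transient af7171...-3f644f40fb983ea8.service that exits with status 1 here is the podman healthcheck runner, not the container itself: the health_status event above shows health_status=starting with health_failing_streak=1, so the /openstack/healthcheck probe failed once while multipathd was still coming up. The health state can be checked directly, for example:

    # Run the container's configured healthcheck once; exit 0 = healthy, 1 = unhealthy
    podman healthcheck run multipathd; echo "exit=$?"
    # Dump the recorded health status and failing streak
    podman inspect multipathd --format '{{json .State}}'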
Oct 12 17:16:10 np0005481680 multipathd[244643]: + [[ ! -n '' ]]
Oct 12 17:16:10 np0005481680 multipathd[244643]: + . kolla_extend_start
Oct 12 17:16:10 np0005481680 multipathd[244643]: Running command: '/usr/sbin/multipathd -d'
Oct 12 17:16:10 np0005481680 multipathd[244643]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 12 17:16:10 np0005481680 multipathd[244643]: + umask 0022
Oct 12 17:16:10 np0005481680 multipathd[244643]: + exec /usr/sbin/multipathd -d
Oct 12 17:16:10 np0005481680 multipathd[244643]: 3437.453293 | --------start up--------
Oct 12 17:16:10 np0005481680 multipathd[244643]: 3437.453313 | read /etc/multipath.conf
Oct 12 17:16:10 np0005481680 multipathd[244643]: 3437.461661 | path checkers start up
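[Annotation] The "+"-prefixed lines are bash xtrace output from the kolla start script: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json (mounted from multipathd.json per the volume list above), then the command stored in /run_command is exec'd. A minimal sketch of what that config.json contains, inferred from the trace and kolla conventions (the actual file contents are not logged, so this is an assumption):

    $ cat /var/lib/kolla/config_files/config.json
    {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [],
        "permissions": []
    }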
Oct 12 17:16:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:10 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:11 np0005481680 python3.9[244833]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:16:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:16:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:16:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:16:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:16:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:12.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:12 np0005481680 python3.9[244988]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:16:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:12 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:13 np0005481680 python3.9[245154]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:16:13 np0005481680 systemd[1]: Stopping multipathd container...
Oct 12 17:16:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:13 np0005481680 multipathd[244643]: 3440.621501 | exit (signal)
Oct 12 17:16:13 np0005481680 multipathd[244643]: 3440.621585 | --------shut down-------
Oct 12 17:16:13 np0005481680 systemd[1]: libpod-af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950.scope: Deactivated successfully.
Oct 12 17:16:13 np0005481680 podman[245159]: 2025-10-12 21:16:13.968272792 +0000 UTC m=+0.092747632 container died af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:16:13 np0005481680 systemd[1]: af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-3f644f40fb983ea8.timer: Deactivated successfully.
Oct 12 17:16:13 np0005481680 systemd[1]: Stopped /usr/bin/podman healthcheck run af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950.
Oct 12 17:16:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-userdata-shm.mount: Deactivated successfully.
Oct 12 17:16:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-45ad0f809537d8a550f7d72bba319141be93f0b669c7ac5c1253519f1fafbc91-merged.mount: Deactivated successfully.
Oct 12 17:16:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:14 np0005481680 podman[245159]: 2025-10-12 21:16:14.198089193 +0000 UTC m=+0.322564033 container cleanup af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:16:14 np0005481680 podman[245159]: multipathd
Oct 12 17:16:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e140040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:14 np0005481680 podman[245188]: multipathd
Oct 12 17:16:14 np0005481680 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 12 17:16:14 np0005481680 systemd[1]: Stopped multipathd container.
Oct 12 17:16:14 np0005481680 systemd[1]: Starting multipathd container...
Oct 12 17:16:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:14.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:16:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ad0f809537d8a550f7d72bba319141be93f0b669c7ac5c1253519f1fafbc91/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ad0f809537d8a550f7d72bba319141be93f0b669c7ac5c1253519f1fafbc91/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:14 np0005481680 systemd[1]: Started /usr/bin/podman healthcheck run af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950.
Oct 12 17:16:14 np0005481680 podman[245201]: 2025-10-12 21:16:14.518678635 +0000 UTC m=+0.175330055 container init af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:16:14 np0005481680 multipathd[245216]: + sudo -E kolla_set_configs
Oct 12 17:16:14 np0005481680 podman[245201]: 2025-10-12 21:16:14.563188298 +0000 UTC m=+0.219839658 container start af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 12 17:16:14 np0005481680 podman[245201]: multipathd
Oct 12 17:16:14 np0005481680 systemd[1]: Started multipathd container.
Oct 12 17:16:14 np0005481680 multipathd[245216]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:16:14 np0005481680 multipathd[245216]: INFO:__main__:Validating config file
Oct 12 17:16:14 np0005481680 multipathd[245216]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:16:14 np0005481680 multipathd[245216]: INFO:__main__:Writing out command to execute
Oct 12 17:16:14 np0005481680 multipathd[245216]: ++ cat /run_command
Oct 12 17:16:14 np0005481680 multipathd[245216]: + CMD='/usr/sbin/multipathd -d'
Oct 12 17:16:14 np0005481680 multipathd[245216]: + ARGS=
Oct 12 17:16:14 np0005481680 multipathd[245216]: + sudo kolla_copy_cacerts
Oct 12 17:16:14 np0005481680 multipathd[245216]: + [[ ! -n '' ]]
Oct 12 17:16:14 np0005481680 multipathd[245216]: + . kolla_extend_start
Oct 12 17:16:14 np0005481680 multipathd[245216]: Running command: '/usr/sbin/multipathd -d'
Oct 12 17:16:14 np0005481680 multipathd[245216]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 12 17:16:14 np0005481680 multipathd[245216]: + umask 0022
Oct 12 17:16:14 np0005481680 multipathd[245216]: + exec /usr/sbin/multipathd -d
Oct 12 17:16:14 np0005481680 podman[245223]: 2025-10-12 21:16:14.680982246 +0000 UTC m=+0.100879189 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 12 17:16:14 np0005481680 systemd[1]: af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-753cd14dd9edbbff.service: Main process exited, code=exited, status=1/FAILURE
Oct 12 17:16:14 np0005481680 systemd[1]: af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950-753cd14dd9edbbff.service: Failed with result 'exit-code'.
Oct 12 17:16:14 np0005481680 multipathd[245216]: 3441.388303 | --------start up--------
Oct 12 17:16:14 np0005481680 multipathd[245216]: 3441.388332 | read /etc/multipath.conf
Oct 12 17:16:14 np0005481680 multipathd[245216]: 3441.396896 | path checkers start up
Oct 12 17:16:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:14 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e20002a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:15 np0005481680 python3.9[245409]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
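[Annotation] Taken together, the stat on /etc/multipath/.multipath_restart_required (17:16:11), the podman ps volume filter (17:16:12), the service restart (17:16:13), and this file-removal task implement a restart-flag pattern: restart whichever containers mount /etc/multipath.conf only when the flag file exists, then clear the flag. A plain-shell condensation of the same logic, as an illustration rather than the role's actual code:

    # Hypothetical sketch of the edpm multipath restart handler
    if [ -e /etc/multipath/.multipath_restart_required ]; then
        podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'   # discover consumers
        systemctl restart edpm_multipathd
        rm -f /etc/multipath/.multipath_restart_required
    fi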
Oct 12 17:16:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:15.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:16 np0005481680 kernel: ganesha.nfsd[243328]: segfault at 50 ip 00007f0ee016532e sp 00007f0ea0ff8210 error 4 in libntirpc.so.5.8[7f0ee014a000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 12 17:16:16 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:16:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[208258]: 12/10/2025 21:16:16 : epoch 68ec19ba : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0e08001090 fd 39 proxy ignored for local
Oct 12 17:16:16 np0005481680 systemd[1]: Started Process Core Dump (PID 245534/UID 0).
Oct 12 17:16:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:16.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:16 np0005481680 python3.9[245564]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 12 17:16:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:17.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:16:17 np0005481680 systemd-coredump[245545]: Process 208285 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 70:
                                                       #0  0x00007f0ee016532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
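[Annotation] The kernel segfault at 21:16:16 (instruction pointer inside libntirpc.so.5.8) and this coredump record describe the same crash of ganesha.nfsd PID 208285. On an EL9 host the stored core can be examined with coredumpctl, e.g.:

    coredumpctl list ganesha.nfsd    # list stored cores for this executable
    coredumpctl info 208285          # metadata and the captured stack trace
    coredumpctl debug 208285         # open gdb on the core (needs debuginfo packages)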
Oct 12 17:16:17 np0005481680 python3.9[245717]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 12 17:16:17 np0005481680 kernel: Key type psk registered
Oct 12 17:16:17 np0005481680 systemd[1]: systemd-coredump@8-245534-0.service: Deactivated successfully.
Oct 12 17:16:17 np0005481680 podman[245723]: 2025-10-12 21:16:17.580521356 +0000 UTC m=+0.052032775 container died 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:16:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-13735665cc5fe667d104b1ca700fbdf9423c1718e2b99281aaa46f564969a26d-merged.mount: Deactivated successfully.
Oct 12 17:16:17 np0005481680 podman[245723]: 2025-10-12 21:16:17.638269837 +0000 UTC m=+0.109781196 container remove 4aaca687512feae0dd76c66b68777905951ab1acfe4557c35d0945b1f661cb9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:16:17 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:16:17 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:16:17 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.816s CPU time.
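[Annotation] status=139/n/a decodes as 128 + 11, i.e. the main process was killed by SIGSEGV, consistent with the segfault above. Whether and how fast systemd retries is governed by the unit's Restart= and start-rate-limit settings, which can be read with:

    systemctl show ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service \
        -p Restart,RestartUSec,StartLimitIntervalUSec,StartLimitBurst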
Oct 12 17:16:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:16:18
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.nfs', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'vms']
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:16:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:16:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:16:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:16:18.350 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:16:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:16:18.350 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:16:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:16:18.350 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:16:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:18.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:18 np0005481680 python3.9[245927]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:16:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:19 np0005481680 python3.9[246054]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760303777.791415-2714-51315070032968/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:20 np0005481680 python3.9[246229]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
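[Annotation] Writing both /etc/modules-load.d/nvme-fabrics.conf (the copy at 17:16:19) and a line in /etc/modules covers both conventions, but on EL9 it is systemd-modules-load reading /etc/modules-load.d/*.conf that actually loads the module at boot, hence the service restart that follows. The template content itself is not logged; presumably the generated file holds just the module name:

    $ cat /etc/modules-load.d/nvme-fabrics.conf
    nvme-fabrics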
Oct 12 17:16:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:16:21 np0005481680 python3.9[246381]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:16:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:16:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:16:22 np0005481680 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 12 17:16:22 np0005481680 systemd[1]: Stopped Load Kernel Modules.
Oct 12 17:16:22 np0005481680 systemd[1]: Stopping Load Kernel Modules...
Oct 12 17:16:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211622 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:16:22 np0005481680 systemd[1]: Starting Load Kernel Modules...
Oct 12 17:16:22 np0005481680 systemd[1]: Finished Load Kernel Modules.
Oct 12 17:16:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:22.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:23 np0005481680 python3.9[246539]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 12 17:16:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:16:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:23.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:16:24 np0005481680 podman[246597]: 2025-10-12 21:16:24.016270435 +0000 UTC m=+0.087401657 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:16:24 np0005481680 python3.9[246644]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 12 17:16:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:24.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211625 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
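[Annotation] With ganesha.nfsd down, haproxy's Layer4 checks get "Connection refused" and mark nfs.cephfs.2 and then nfs.cephfs.0 DOWN, leaving one active backend. Live backend state can be queried through haproxy's runtime socket; "show servers state" is a standard runtime API command, but the socket path inside this cephadm-managed haproxy container is an assumption here:

    echo "show servers state" | socat stdio /var/lib/haproxy/stats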
Oct 12 17:16:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:25.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:16:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:27.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 9.
Oct 12 17:16:28 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:16:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.816s CPU time.
Oct 12 17:16:28 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
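[Annotation] This is the ninth automatic restart of the NFS unit; if the crashes continue and the counter exceeds the unit's start-rate limit within its interval, systemd stops retrying and leaves the unit in the failed state. The counter can be watched with:

    systemctl show ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service -p NRestarts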
Oct 12 17:16:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:28.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:28 np0005481680 podman[246702]: 2025-10-12 21:16:28.38939401 +0000 UTC m=+0.076872468 container create c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:16:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270a446daa07ffbbb7b16dea3c52d613803736216c0cc5eb23f6692599a5bcc8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:28 np0005481680 podman[246702]: 2025-10-12 21:16:28.359457249 +0000 UTC m=+0.046935767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:16:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270a446daa07ffbbb7b16dea3c52d613803736216c0cc5eb23f6692599a5bcc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270a446daa07ffbbb7b16dea3c52d613803736216c0cc5eb23f6692599a5bcc8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270a446daa07ffbbb7b16dea3c52d613803736216c0cc5eb23f6692599a5bcc8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:16:28 np0005481680 podman[246702]: 2025-10-12 21:16:28.476614902 +0000 UTC m=+0.164093350 container init c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:16:28 np0005481680 podman[246702]: 2025-10-12 21:16:28.490666619 +0000 UTC m=+0.178145087 container start c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:16:28 np0005481680 bash[246702]: c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:16:28 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:16:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:28 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:16:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:29.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:16:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:30.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:16:30 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:30 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:30 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:16:30 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:30 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:30 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:31 np0005481680 systemd-logind[783]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 12 17:16:31 np0005481680 systemd-logind[783]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 12 17:16:31 np0005481680 lvm[246871]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:16:31 np0005481680 lvm[246871]: VG ceph_vg0 finished
Oct 12 17:16:31 np0005481680 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 12 17:16:31 np0005481680 systemd[1]: Starting man-db-cache-update.service...
Oct 12 17:16:31 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:31 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:31 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:32 np0005481680 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 12 17:16:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:16:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:16:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:32.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:16:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:16:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:16:33 np0005481680 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 12 17:16:33 np0005481680 systemd[1]: Finished man-db-cache-update.service.
Oct 12 17:16:33 np0005481680 systemd[1]: man-db-cache-update.service: Consumed 2.180s CPU time.
Oct 12 17:16:33 np0005481680 systemd[1]: run-r3b7f0fdb1def489cbc8639c236368146.service: Deactivated successfully.
Oct 12 17:16:33 np0005481680 python3.9[248216]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:33.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:34.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:34 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:16:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:34 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:16:34 np0005481680 python3.9[248368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 12 17:16:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:16:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:36 np0005481680 python3.9[248526]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:36.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:16:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:16:37 np0005481680 podman[248651]: 2025-10-12 21:16:37.615390197 +0000 UTC m=+0.136883486 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:16:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:37 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:16:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:37 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:16:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:37 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:16:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:37 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:16:37 np0005481680 python3.9[248697]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:16:37 np0005481680 systemd[1]: Reloading.
Oct 12 17:16:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:37 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:16:37 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:16:38 np0005481680 podman[248740]: 2025-10-12 21:16:38.347563297 +0000 UTC m=+0.096872068 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 12 17:16:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 12 17:16:39 np0005481680 python3.9[248909]: ansible-ansible.builtin.service_facts Invoked
Oct 12 17:16:39 np0005481680 network[248952]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 12 17:16:39 np0005481680 network[248953]: 'network-scripts' will be removed from distribution in near future.
Oct 12 17:16:39 np0005481680 network[248954]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 12 17:16:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:16:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:40.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:16:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:16:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:16:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:40 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3970000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:42] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:16:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:42] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:16:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:42 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:42 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f394c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:42.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:16:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:42 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3944000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:44 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3970001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211644 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:16:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:44 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:44.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:16:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:44 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f394c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:45 np0005481680 podman[249125]: 2025-10-12 21:16:45.171607171 +0000 UTC m=+0.126011959 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:16:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211645 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:16:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:45 np0005481680 python3.9[249273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:45.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:46 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:46 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3970001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Oct 12 17:16:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:46 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:47.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:16:47 np0005481680 python3.9[249428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:47.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:48 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f394c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:48 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:16:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:16:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:16:48 np0005481680 python3.9[249582]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Oct 12 17:16:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:48 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39700089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:49 np0005481680 python3.9[249736]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:49.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:50 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:50 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f394c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:50 np0005481680 python3.9[249890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Oct 12 17:16:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:50 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:51 np0005481680 python3.9[250044]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:51.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:52] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:16:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:16:52] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:16:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:52 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39700089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:52 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:52 np0005481680 python3.9[250198]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:16:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:52 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f394c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:53 np0005481680 python3.9[250352]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:16:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:53.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:54 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3944002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:54 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f39700096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:54 np0005481680 podman[250478]: 2025-10-12 21:16:54.327338409 +0000 UTC m=+0.091950361 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:16:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:54 np0005481680 python3.9[250525]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:16:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:54 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:55 np0005481680 python3.9[250679]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:16:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:55.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:56 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:16:56 np0005481680 python3.9[250832]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[246717]: 12/10/2025 21:16:56 : epoch 68ec1aac : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3964001c00 fd 38 proxy ignored for local
Oct 12 17:16:56 np0005481680 kernel: ganesha.nfsd[248990]: segfault at 50 ip 00007f3a1c20b32e sp 00007f39ea7fb210 error 4 in libntirpc.so.5.8[7f3a1c1f0000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 12 17:16:56 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:16:56 np0005481680 systemd[1]: Started Process Core Dump (PID 250857/UID 0).
Oct 12 17:16:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:16:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:16:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:57 np0005481680 python3.9[250986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:16:57.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:16:57 np0005481680 systemd-coredump[250858]: Process 246721 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 43:
                                                       #0  0x00007f3a1c20b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:16:57 np0005481680 systemd[1]: systemd-coredump@9-250857-0.service: Deactivated successfully.
Oct 12 17:16:57 np0005481680 systemd[1]: systemd-coredump@9-250857-0.service: Consumed 1.190s CPU time.
Oct 12 17:16:57 np0005481680 podman[251145]: 2025-10-12 21:16:57.659128324 +0000 UTC m=+0.036913260 container died c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:16:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-270a446daa07ffbbb7b16dea3c52d613803736216c0cc5eb23f6692599a5bcc8-merged.mount: Deactivated successfully.
Oct 12 17:16:57 np0005481680 podman[251145]: 2025-10-12 21:16:57.723687458 +0000 UTC m=+0.101472394 container remove c42fc3f60f79d5ae98fce8d1c3eb9fc40f6b416c197bd33bc97bdcbffaa50d24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:16:57 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:16:57 np0005481680 python3.9[251140]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:57.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:57 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:16:57 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.749s CPU time.
Oct 12 17:16:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:16:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:16:58 np0005481680 python3.9[251338]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:16:59 np0005481680 python3.9[251491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:16:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:16:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:16:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:16:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:00 np0005481680 python3.9[251669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:00.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:17:01 np0005481680 python3.9[251821]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:17:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:17:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:01 np0005481680 python3.9[252046]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:02] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:17:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:02] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211702 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:17:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:17:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:02.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:02 np0005481680 python3.9[252247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.057172873 +0000 UTC m=+0.077671187 container create 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:17:03 np0005481680 systemd[1]: Started libpod-conmon-86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b.scope.
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.025045265 +0000 UTC m=+0.045543629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:03 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.180763901 +0000 UTC m=+0.201262265 container init 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.19056487 +0000 UTC m=+0.211063184 container start 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.195480505 +0000 UTC m=+0.215978819 container attach 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:17:03 np0005481680 distracted_villani[252430]: 167 167
Oct 12 17:17:03 np0005481680 systemd[1]: libpod-86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b.scope: Deactivated successfully.
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.201241161 +0000 UTC m=+0.221739685 container died 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:17:03 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5b66648dfcd5e89167499e92d965efef8f24abbde4f39cd9a1042ca4564bb1f4-merged.mount: Deactivated successfully.
Oct 12 17:17:03 np0005481680 podman[252378]: 2025-10-12 21:17:03.255923403 +0000 UTC m=+0.276421717 container remove 86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:17:03 np0005481680 systemd[1]: libpod-conmon-86dd2f3067d774288e5ed075f86c7ff4bfd5a54d6a0a7da64d44324dce5a142b.scope: Deactivated successfully.
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:17:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:17:03 np0005481680 python3.9[252487]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:03 np0005481680 podman[252493]: 2025-10-12 21:17:03.505656912 +0000 UTC m=+0.064470132 container create 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:17:03 np0005481680 systemd[1]: Started libpod-conmon-6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca.scope.
Oct 12 17:17:03 np0005481680 podman[252493]: 2025-10-12 21:17:03.474382645 +0000 UTC m=+0.033195915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:03 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:03 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:03 np0005481680 podman[252493]: 2025-10-12 21:17:03.633941937 +0000 UTC m=+0.192755127 container init 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:17:03 np0005481680 podman[252493]: 2025-10-12 21:17:03.648278523 +0000 UTC m=+0.207091733 container start 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 12 17:17:03 np0005481680 podman[252493]: 2025-10-12 21:17:03.65330016 +0000 UTC m=+0.212113370 container attach 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:17:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:03.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:04 np0005481680 festive_lovelace[252513]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:17:04 np0005481680 festive_lovelace[252513]: --> All data devices are unavailable
Oct 12 17:17:04 np0005481680 podman[252493]: 2025-10-12 21:17:04.062778346 +0000 UTC m=+0.621591556 container died 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:17:04 np0005481680 systemd[1]: libpod-6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca.scope: Deactivated successfully.
Oct 12 17:17:04 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ddcce7bf03be989d5a049e18e32acc646aeb3db106118a475445b34d2fa5ddc7-merged.mount: Deactivated successfully.
Oct 12 17:17:04 np0005481680 podman[252493]: 2025-10-12 21:17:04.120443224 +0000 UTC m=+0.679256444 container remove 6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:17:04 np0005481680 systemd[1]: libpod-conmon-6fcc41cae47c5eabc47b3182f413283f6be06b77d07289836eb37a3e77e234ca.scope: Deactivated successfully.
Oct 12 17:17:04 np0005481680 python3.9[252683]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:17:04 np0005481680 podman[252900]: 2025-10-12 21:17:04.864011804 +0000 UTC m=+0.063669512 container create 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:17:04 np0005481680 systemd[1]: Started libpod-conmon-327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba.scope.
Oct 12 17:17:04 np0005481680 podman[252900]: 2025-10-12 21:17:04.83909673 +0000 UTC m=+0.038754478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:04 np0005481680 podman[252900]: 2025-10-12 21:17:04.9914943 +0000 UTC m=+0.191152048 container init 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:17:05 np0005481680 podman[252900]: 2025-10-12 21:17:05.040373345 +0000 UTC m=+0.240031023 container start 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 17:17:05 np0005481680 podman[252900]: 2025-10-12 21:17:05.043933755 +0000 UTC m=+0.243591523 container attach 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:17:05 np0005481680 optimistic_ramanujan[252947]: 167 167
Oct 12 17:17:05 np0005481680 systemd[1]: libpod-327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba.scope: Deactivated successfully.
Oct 12 17:17:05 np0005481680 conmon[252947]: conmon 327060f93be977c2270d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba.scope/container/memory.events
Oct 12 17:17:05 np0005481680 podman[252900]: 2025-10-12 21:17:05.049032285 +0000 UTC m=+0.248689993 container died 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:17:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5c397687fa4d712bdc41a616f315c9912db80d3429f0fdce37561db08f62d972-merged.mount: Deactivated successfully.
Oct 12 17:17:05 np0005481680 python3.9[252946]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:05 np0005481680 podman[252900]: 2025-10-12 21:17:05.109707319 +0000 UTC m=+0.309364987 container remove 327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:17:05 np0005481680 systemd[1]: libpod-conmon-327060f93be977c2270d13b6f940fbf394c3ab663e1fdd4f70196aa75c262cba.scope: Deactivated successfully.
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.304259232 +0000 UTC m=+0.056261733 container create b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:17:05 np0005481680 systemd[1]: Started libpod-conmon-b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d.scope.
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.278665281 +0000 UTC m=+0.030667782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:05 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3b303854448447defcdb4469fdc83194c7532a3c3bc4671213d514f53c2dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3b303854448447defcdb4469fdc83194c7532a3c3bc4671213d514f53c2dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3b303854448447defcdb4469fdc83194c7532a3c3bc4671213d514f53c2dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3b303854448447defcdb4469fdc83194c7532a3c3bc4671213d514f53c2dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.423604862 +0000 UTC m=+0.175607363 container init b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.432468657 +0000 UTC m=+0.184471148 container start b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.436705775 +0000 UTC m=+0.188708266 container attach b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:17:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:05 np0005481680 boring_hugle[253062]: {
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:    "0": [
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:        {
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "devices": [
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "/dev/loop3"
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            ],
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "lv_name": "ceph_lv0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "lv_size": "21470642176",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "name": "ceph_lv0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "tags": {
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.cluster_name": "ceph",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.crush_device_class": "",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.encrypted": "0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.osd_id": "0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.type": "block",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.vdo": "0",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:                "ceph.with_tpm": "0"
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            },
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "type": "block",
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:            "vg_name": "ceph_vg0"
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:        }
Oct 12 17:17:05 np0005481680 boring_hugle[253062]:    ]
Oct 12 17:17:05 np0005481680 boring_hugle[253062]: }
Oct 12 17:17:05 np0005481680 systemd[1]: libpod-b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d.scope: Deactivated successfully.
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.811505237 +0000 UTC m=+0.563507708 container died b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 17:17:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-af3b303854448447defcdb4469fdc83194c7532a3c3bc4671213d514f53c2dd6-merged.mount: Deactivated successfully.
Oct 12 17:17:05 np0005481680 podman[252999]: 2025-10-12 21:17:05.856507443 +0000 UTC m=+0.608509904 container remove b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:17:05 np0005481680 systemd[1]: libpod-conmon-b4fd598ed918836fceba3eec156f833867e893ac0acc296db79ced9b5115781d.scope: Deactivated successfully.
Oct 12 17:17:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:05.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:05 np0005481680 python3.9[253149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.73703057 +0000 UTC m=+0.079659759 container create d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:17:06 np0005481680 systemd[1]: Started libpod-conmon-d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d.scope.
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.702650734 +0000 UTC m=+0.045279973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.834495071 +0000 UTC m=+0.177124300 container init d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:17:06 np0005481680 python3.9[253407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.846018354 +0000 UTC m=+0.188647513 container start d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.849679368 +0000 UTC m=+0.192308607 container attach d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:17:06 np0005481680 zealous_dijkstra[253424]: 167 167
Oct 12 17:17:06 np0005481680 systemd[1]: libpod-d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d.scope: Deactivated successfully.
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.856110302 +0000 UTC m=+0.198739451 container died d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:17:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-397853c3be8646d3b22ce75df2befa15f87f29c4cc91369d397442454162d79f-merged.mount: Deactivated successfully.
Oct 12 17:17:06 np0005481680 podman[253408]: 2025-10-12 21:17:06.907277244 +0000 UTC m=+0.249906393 container remove d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dijkstra, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:17:06 np0005481680 systemd[1]: libpod-conmon-d628dc05dea72a140ec23733447205ef3f47776e7c454e2ae714ef61e8d2e92d.scope: Deactivated successfully.
Oct 12 17:17:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:07.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:17:07 np0005481680 podman[253474]: 2025-10-12 21:17:07.174846986 +0000 UTC m=+0.075328039 container create c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:17:07 np0005481680 systemd[1]: Started libpod-conmon-c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403.scope.
Oct 12 17:17:07 np0005481680 podman[253474]: 2025-10-12 21:17:07.145979051 +0000 UTC m=+0.046460164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:17:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7729b45945eb4098eb2d4447ac65310ca0ef56937ad0ed08ed3ad38649497a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7729b45945eb4098eb2d4447ac65310ca0ef56937ad0ed08ed3ad38649497a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7729b45945eb4098eb2d4447ac65310ca0ef56937ad0ed08ed3ad38649497a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7729b45945eb4098eb2d4447ac65310ca0ef56937ad0ed08ed3ad38649497a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:07 np0005481680 podman[253474]: 2025-10-12 21:17:07.302534776 +0000 UTC m=+0.203015879 container init c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:17:07 np0005481680 podman[253474]: 2025-10-12 21:17:07.315147397 +0000 UTC m=+0.215628430 container start c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:17:07 np0005481680 podman[253474]: 2025-10-12 21:17:07.318823821 +0000 UTC m=+0.219304924 container attach c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:17:07 np0005481680 podman[253643]: 2025-10-12 21:17:07.880508171 +0000 UTC m=+0.131884978 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 12 17:17:07 np0005481680 python3.9[253646]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:07.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:08 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 10.
Oct 12 17:17:08 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:17:08 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.749s CPU time.
Oct 12 17:17:08 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:17:08 np0005481680 lvm[253760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:17:08 np0005481680 lvm[253760]: VG ceph_vg0 finished
Oct 12 17:17:08 np0005481680 quirky_neumann[253492]: {}
Oct 12 17:17:08 np0005481680 podman[253474]: 2025-10-12 21:17:08.210355459 +0000 UTC m=+1.110836492 container died c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:17:08 np0005481680 systemd[1]: libpod-c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403.scope: Deactivated successfully.
Oct 12 17:17:08 np0005481680 systemd[1]: libpod-c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403.scope: Consumed 1.457s CPU time.
Oct 12 17:17:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a7729b45945eb4098eb2d4447ac65310ca0ef56937ad0ed08ed3ad38649497a2-merged.mount: Deactivated successfully.
Oct 12 17:17:08 np0005481680 podman[253474]: 2025-10-12 21:17:08.271819514 +0000 UTC m=+1.172300547 container remove c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:17:08 np0005481680 systemd[1]: libpod-conmon-c3a9f396b65647441becb961ab34699fe5242d5368f60facf822f3b0117f9403.scope: Deactivated successfully.
Oct 12 17:17:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:17:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:17:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:08 np0005481680 podman[253859]: 2025-10-12 21:17:08.395911253 +0000 UTC m=+0.070815094 container create 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:17:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740fc2371dfeb72c46c328020b4ff255b802c4159a5519db1002360c71e4940b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740fc2371dfeb72c46c328020b4ff255b802c4159a5519db1002360c71e4940b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740fc2371dfeb72c46c328020b4ff255b802c4159a5519db1002360c71e4940b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740fc2371dfeb72c46c328020b4ff255b802c4159a5519db1002360c71e4940b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:08 np0005481680 podman[253859]: 2025-10-12 21:17:08.365990911 +0000 UTC m=+0.040894802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:08 np0005481680 podman[253859]: 2025-10-12 21:17:08.470891072 +0000 UTC m=+0.145794953 container init 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:17:08 np0005481680 podman[253859]: 2025-10-12 21:17:08.485627157 +0000 UTC m=+0.160530998 container start 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:17:08 np0005481680 bash[253859]: 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:17:08 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:17:08 np0005481680 podman[253891]: 2025-10-12 21:17:08.557787274 +0000 UTC m=+0.112791903 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:17:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:08 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:17:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:17:08 np0005481680 python3.9[254037]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 12 17:17:09 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:09 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:17:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:09.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:10 np0005481680 python3.9[254191]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:17:10 np0005481680 systemd[1]: Reloading.
Oct 12 17:17:10 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:17:10 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:17:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:11 np0005481680 python3.9[254378]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:17:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:17:12 np0005481680 python3.9[254532]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:13 np0005481680 python3.9[254685]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:14 np0005481680 python3.9[254840]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:14 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:17:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:14 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:17:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:17:15 np0005481680 python3.9[254993]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:15 np0005481680 podman[255120]: 2025-10-12 21:17:15.779866333 +0000 UTC m=+0.118911799 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 12 17:17:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:15.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:15 np0005481680 python3.9[255165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:17:16 np0005481680 python3.9[255321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:17.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:17:17 np0005481680 python3.9[255475]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 12 17:17:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:17:18
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:17:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:17:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:17:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:17:18.354 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:17:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:17:18.360 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:17:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:17:18.360 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:18.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:17:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:17:19 np0005481680 python3.9[255630]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:19.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:20 np0005481680 python3.9[255808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:17:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:17:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:17:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:20 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:21 np0005481680 python3.9[255971]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:21 np0005481680 python3.9[256129]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:22] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:22] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:22 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c000da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:22 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb428000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:22.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:22 np0005481680 python3.9[256281]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:17:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:22 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:23 np0005481680 python3.9[256434]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:24 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211724 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:17:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:24 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c000da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:24 np0005481680 python3.9[256587]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:24.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:17:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:24 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:24 np0005481680 podman[256711]: 2025-10-12 21:17:24.9961318 +0000 UTC m=+0.093749569 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:17:25 np0005481680 python3.9[256753]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:26 np0005481680 python3.9[256909]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:26 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:26 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:26.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:17:26 np0005481680 python3.9[257061]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:26 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:27.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:17:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:27.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:17:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211727 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:17:27 np0005481680 python3.9[257214]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:28 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:28 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:28 np0005481680 python3.9[257367]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:17:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:28 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:30 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:30 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:17:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:30 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:32] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:32] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:32 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:32 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:17:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:32 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb428002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:17:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:17:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:33 np0005481680 python3.9[257525]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 12 17:17:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:34 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c0091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:34 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:34 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c001ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:34 np0005481680 python3.9[257678]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 12 17:17:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:35 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:17:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:35.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:36 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb428002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:36 np0005481680 python3.9[257838]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 12 17:17:36 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:17:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:36 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:36 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:37.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:17:37 np0005481680 systemd-logind[783]: New session 57 of user zuul.
Oct 12 17:17:37 np0005481680 systemd[1]: Started Session 57 of User zuul.
Oct 12 17:17:37 np0005481680 systemd[1]: session-57.scope: Deactivated successfully.
Oct 12 17:17:37 np0005481680 systemd-logind[783]: Session 57 logged out. Waiting for processes to exit.
Oct 12 17:17:37 np0005481680 systemd-logind[783]: Removed session 57.
Oct 12 17:17:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:37.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:38 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb430002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:38 np0005481680 python3.9[258027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:38 np0005481680 podman[258028]: 2025-10-12 21:17:38.171320214 +0000 UTC m=+0.128489352 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 12 17:17:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:38 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb44c009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:38 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:17:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:38 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:17:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:38.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:38 np0005481680 python3.9[258175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303857.5588083-4351-128204967848802/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:38 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb424000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:38 np0005481680 podman[258176]: 2025-10-12 21:17:38.937621984 +0000 UTC m=+0.094369165 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 12 17:17:39 np0005481680 python3.9[258369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:39.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:40 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb428002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:17:40 np0005481680 python3.9[258448]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[253902]: 12/10/2025 21:17:40 : epoch 68ec1ad4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb43c002fa0 fd 39 proxy ignored for local
Oct 12 17:17:40 np0005481680 kernel: ganesha.nfsd[255956]: segfault at 50 ip 00007fb4f9e1132e sp 00007fb4bf7fd210 error 4 in libntirpc.so.5.8[7fb4f9df6000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 12 17:17:40 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:17:40 np0005481680 systemd[1]: Started Process Core Dump (PID 258491/UID 0).
Oct 12 17:17:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:40.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:17:40 np0005481680 python3.9[258600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:41 np0005481680 systemd-coredump[258497]: Process 253921 (ganesha.nfsd) of user 0 dumped core.
Oct 12 17:17:41 np0005481680 systemd-coredump[258497]: Stack trace of thread 45:
Oct 12 17:17:41 np0005481680 systemd-coredump[258497]: #0  0x00007fb4f9e1132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Oct 12 17:17:41 np0005481680 systemd-coredump[258497]: ELF object binary architecture: AMD x86-64
Oct 12 17:17:41 np0005481680 systemd[1]: systemd-coredump@10-258491-0.service: Deactivated successfully.
Oct 12 17:17:41 np0005481680 systemd[1]: systemd-coredump@10-258491-0.service: Consumed 1.073s CPU time.
Oct 12 17:17:41 np0005481680 podman[258727]: 2025-10-12 21:17:41.640129547 +0000 UTC m=+0.047681181 container died 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:17:41 np0005481680 python3.9[258723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303860.3491564-4351-262408704287163/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-740fc2371dfeb72c46c328020b4ff255b802c4159a5519db1002360c71e4940b-merged.mount: Deactivated successfully.
Oct 12 17:17:41 np0005481680 podman[258727]: 2025-10-12 21:17:41.700639519 +0000 UTC m=+0.108191093 container remove 8b1631445cda08ac34f9e253d6100c7af573b9bd173739f0ef13ce86d239d4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:17:41 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:17:41 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:17:41 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.497s CPU time.
Oct 12 17:17:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:42] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:42] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:17:42 np0005481680 python3.9[258920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:17:43 np0005481680 python3.9[259041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303861.84077-4351-267179131144515/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:43 np0005481680 python3.9[259193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:44 np0005481680 python3.9[259314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303863.2701437-4351-192062538387556/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:17:45 np0005481680 python3.9[259467]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:46 np0005481680 podman[259545]: 2025-10-12 21:17:46.151550589 +0000 UTC m=+0.115519058 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Oct 12 17:17:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211746 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:17:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:46 np0005481680 python3.9[259640]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:17:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:17:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:47.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:17:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:47.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:17:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:47.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:17:47 np0005481680 python3.9[259793]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:17:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211747 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:17:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:17:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:17:48 np0005481680 python3.9[259946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:17:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:17:49 np0005481680 python3.9[260069]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760303867.7403944-4630-211051983895074/.source _original_basename=.gsfyi1az follow=False checksum=d074717e807381e578c75ca5767b305436d8bc4f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 12 17:17:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:50 np0005481680 python3.9[260223]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:17:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:17:50 np0005481680 python3.9[260375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:51 np0005481680 python3.9[260497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303870.3807433-4708-191813763768744/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:52 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 11.
Oct 12 17:17:52 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:17:52 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.497s CPU time.
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:17:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:17:52] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:17:52 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:17:52 np0005481680 podman[260673]: 2025-10-12 21:17:52.408370375 +0000 UTC m=+0.074653600 container create 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:17:52 np0005481680 podman[260673]: 2025-10-12 21:17:52.376429411 +0000 UTC m=+0.042712636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:17:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125c54a8fb6f4cc8f475f6c0c9679418a0960e4bfb0c9a75f0e8c0404c387ec4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125c54a8fb6f4cc8f475f6c0c9679418a0960e4bfb0c9a75f0e8c0404c387ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125c54a8fb6f4cc8f475f6c0c9679418a0960e4bfb0c9a75f0e8c0404c387ec4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125c54a8fb6f4cc8f475f6c0c9679418a0960e4bfb0c9a75f0e8c0404c387ec4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:17:52 np0005481680 podman[260673]: 2025-10-12 21:17:52.50396084 +0000 UTC m=+0.170244065 container init 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:17:52 np0005481680 podman[260673]: 2025-10-12 21:17:52.512901944 +0000 UTC m=+0.179185169 container start 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:17:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:17:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:17:52 np0005481680 bash[260673]: 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:17:52 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:17:52 np0005481680 python3.9[260706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:17:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:17:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:53 np0005481680 python3.9[260874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760303871.998622-4753-172108814339008/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 12 17:17:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:54 np0005481680 python3.9[261028]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 12 17:17:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:54.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 12 17:17:55 np0005481680 python3.9[261180]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:17:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:17:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:17:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:17:56 np0005481680 podman[261306]: 2025-10-12 21:17:56.085229881 +0000 UTC m=+0.085677076 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 12 17:17:56 np0005481680 python3[261354]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:17:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:56.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:17:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:57.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:17:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:57.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:17:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:17:57.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:17:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:17:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:17:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:17:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:17:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:17:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:17:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:17:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:17:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:17:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:18:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:00.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:18:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211801 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:18:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:02.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:18:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:02] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:18:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:18:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:18:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:18:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:18:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:03 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:18:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:18:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:18:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:04.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:04 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 12 17:18:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 12 17:18:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 12 17:18:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 12 17:18:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 12 17:18:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 12 17:18:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:06 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 12 17:18:06 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 12 17:18:06 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 12 17:18:06 np0005481680 podman[261367]: 2025-10-12 21:18:06.206770749 +0000 UTC m=+9.729309341 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 12 17:18:06 np0005481680 podman[261487]: 2025-10-12 21:18:06.446050219 +0000 UTC m=+0.080553368 container create 47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 12 17:18:06 np0005481680 podman[261487]: 2025-10-12 21:18:06.40635851 +0000 UTC m=+0.040861729 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 12 17:18:06 np0005481680 python3[261354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 12 17:18:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:18:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:18:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:18:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 12 17:18:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:07.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:18:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:08 np0005481680 python3.9[261679]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:18:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 12 17:18:09 np0005481680 podman[261753]: 2025-10-12 21:18:09.04915188 +0000 UTC m=+0.129915229 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 12 17:18:09 np0005481680 podman[261825]: 2025-10-12 21:18:09.124795663 +0000 UTC m=+0.091695578 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:18:09 np0005481680 python3.9[261943]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:18:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:18:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:10.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:10 np0005481680 python3.9[262177]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.410934921 +0000 UTC m=+0.057673452 container create a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 12 17:18:10 np0005481680 systemd[1]: Started libpod-conmon-a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b.scope.
Oct 12 17:18:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:18:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.388309852 +0000 UTC m=+0.035048433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.497169861 +0000 UTC m=+0.143908422 container init a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.507709205 +0000 UTC m=+0.154447736 container start a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.510910097 +0000 UTC m=+0.157648678 container attach a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Oct 12 17:18:10 np0005481680 nervous_murdock[262243]: 167 167
Oct 12 17:18:10 np0005481680 systemd[1]: libpod-a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b.scope: Deactivated successfully.
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.514540187 +0000 UTC m=+0.161278758 container died a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:18:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-27effb56de32785009de7577e3403d44d774ed3139e0c0d4d93c536397ead234-merged.mount: Deactivated successfully.
Oct 12 17:18:10 np0005481680 podman[262205]: 2025-10-12 21:18:10.565311685 +0000 UTC m=+0.212050246 container remove a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:18:10 np0005481680 systemd[1]: libpod-conmon-a27381a461bcc16bcd8df0fb822b0c55ffa5eda9ed156b3bb7a066514b97038b.scope: Deactivated successfully.
Oct 12 17:18:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 682 B/s wr, 151 op/s
Oct 12 17:18:10 np0005481680 podman[262271]: 2025-10-12 21:18:10.771911453 +0000 UTC m=+0.057160440 container create 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:18:10 np0005481680 systemd[1]: Started libpod-conmon-922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6.scope.
Oct 12 17:18:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:10 np0005481680 podman[262271]: 2025-10-12 21:18:10.742276647 +0000 UTC m=+0.027525714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:10 np0005481680 podman[262271]: 2025-10-12 21:18:10.872833462 +0000 UTC m=+0.158082459 container init 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:18:10 np0005481680 podman[262271]: 2025-10-12 21:18:10.889206934 +0000 UTC m=+0.174455951 container start 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:18:10 np0005481680 podman[262271]: 2025-10-12 21:18:10.896133538 +0000 UTC m=+0.181382545 container attach 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:18:11 np0005481680 sweet_banach[262288]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:18:11 np0005481680 sweet_banach[262288]: --> All data devices are unavailable
Oct 12 17:18:11 np0005481680 systemd[1]: libpod-922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6.scope: Deactivated successfully.
Oct 12 17:18:11 np0005481680 podman[262271]: 2025-10-12 21:18:11.287154016 +0000 UTC m=+0.572403083 container died 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:18:11 np0005481680 systemd[1]: var-lib-containers-storage-overlay-62f9cddbd4f3a4a744a02910d40b0d66c4165a51685662f0fbea9cdd4b99049a-merged.mount: Deactivated successfully.
Oct 12 17:18:11 np0005481680 podman[262271]: 2025-10-12 21:18:11.35727683 +0000 UTC m=+0.642525847 container remove 922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:18:11 np0005481680 systemd[1]: libpod-conmon-922d34c89e5a8a52b75dbd5b4896f5e7585dbae9526a0a4991c8dc219d2044d6.scope: Deactivated successfully.
Oct 12 17:18:11 np0005481680 python3[262431]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 12 17:18:11 np0005481680 podman[262535]: 2025-10-12 21:18:11.928799309 +0000 UTC m=+0.096522650 container create 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 12 17:18:11 np0005481680 podman[262535]: 2025-10-12 21:18:11.858314656 +0000 UTC m=+0.026038017 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 12 17:18:11 np0005481680 python3[262431]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 12 17:18:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:12] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Oct 12 17:18:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.155438391 +0000 UTC m=+0.076060385 container create 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:18:12 np0005481680 systemd[1]: Started libpod-conmon-4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047.scope.
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.123042916 +0000 UTC m=+0.043664940 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:12 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.271783978 +0000 UTC m=+0.192405992 container init 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.283050621 +0000 UTC m=+0.203672615 container start 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.286910109 +0000 UTC m=+0.207532173 container attach 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:18:12 np0005481680 upbeat_leavitt[262649]: 167 167
Oct 12 17:18:12 np0005481680 systemd[1]: libpod-4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047.scope: Deactivated successfully.
Oct 12 17:18:12 np0005481680 conmon[262649]: conmon 4fb4ad90ff24d2f46a69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047.scope/container/memory.events
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.293845723 +0000 UTC m=+0.214467747 container died 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:18:12 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3470c4b54f315326dabe5298cc9f2f0bfaa4a8b123b13f69f56c3e0640d95f84-merged.mount: Deactivated successfully.
Oct 12 17:18:12 np0005481680 podman[262608]: 2025-10-12 21:18:12.340498707 +0000 UTC m=+0.261120721 container remove 4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:18:12 np0005481680 systemd[1]: libpod-conmon-4fb4ad90ff24d2f46a69dce6a1817a808ff16e5d038e967b5c0805120dbb0047.scope: Deactivated successfully.
Oct 12 17:18:12 np0005481680 podman[262724]: 2025-10-12 21:18:12.509000267 +0000 UTC m=+0.052361489 container create 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:18:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:12 np0005481680 systemd[1]: Started libpod-conmon-584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613.scope.
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:18:12 np0005481680 podman[262724]: 2025-10-12 21:18:12.483885455 +0000 UTC m=+0.027246767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:12 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:18:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417bdf0bfad315f5ca3c3dc0ef8d12cefc92d21bbfb2410decc619ffd610772c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:18:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417bdf0bfad315f5ca3c3dc0ef8d12cefc92d21bbfb2410decc619ffd610772c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417bdf0bfad315f5ca3c3dc0ef8d12cefc92d21bbfb2410decc619ffd610772c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417bdf0bfad315f5ca3c3dc0ef8d12cefc92d21bbfb2410decc619ffd610772c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:18:12 np0005481680 podman[262724]: 2025-10-12 21:18:12.618426749 +0000 UTC m=+0.161788001 container init 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:18:12 np0005481680 podman[262724]: 2025-10-12 21:18:12.637300134 +0000 UTC m=+0.180661396 container start 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:18:12 np0005481680 podman[262724]: 2025-10-12 21:18:12.641005077 +0000 UTC m=+0.184366299 container attach 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:18:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 12 17:18:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]: {
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:    "0": [
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:        {
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "devices": [
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "/dev/loop3"
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            ],
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "lv_name": "ceph_lv0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "lv_size": "21470642176",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "name": "ceph_lv0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "tags": {
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.cluster_name": "ceph",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.crush_device_class": "",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.encrypted": "0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.osd_id": "0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.type": "block",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.vdo": "0",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:                "ceph.with_tpm": "0"
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            },
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "type": "block",
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:            "vg_name": "ceph_vg0"
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:        }
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]:    ]
Oct 12 17:18:12 np0005481680 zealous_taussig[262769]: }
Oct 12 17:18:12 np0005481680 systemd[1]: libpod-584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613.scope: Deactivated successfully.
Oct 12 17:18:13 np0005481680 python3.9[262833]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:18:13 np0005481680 podman[262724]: 2025-10-12 21:18:13.001889327 +0000 UTC m=+0.545250589 container died 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:18:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-417bdf0bfad315f5ca3c3dc0ef8d12cefc92d21bbfb2410decc619ffd610772c-merged.mount: Deactivated successfully.
Oct 12 17:18:13 np0005481680 podman[262724]: 2025-10-12 21:18:13.274766363 +0000 UTC m=+0.818127605 container remove 584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:18:13 np0005481680 systemd[1]: libpod-conmon-584ce104c40ff42b7366e1a9faceeb4d58e87781c4906b46aab8428ce4f52613.scope: Deactivated successfully.
Oct 12 17:18:14 np0005481680 python3.9[263073]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:18:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.099917402 +0000 UTC m=+0.120045381 container create 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.017982271 +0000 UTC m=+0.038110330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:14 np0005481680 systemd[1]: Started libpod-conmon-6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133.scope.
Oct 12 17:18:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.23854932 +0000 UTC m=+0.258677329 container init 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.250418559 +0000 UTC m=+0.270546568 container start 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:18:14 np0005481680 zen_mcclintock[263152]: 167 167
Oct 12 17:18:14 np0005481680 systemd[1]: libpod-6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133.scope: Deactivated successfully.
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.260607695 +0000 UTC m=+0.280735684 container attach 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.260939723 +0000 UTC m=+0.281067702 container died 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:18:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-344ff9bb0454833d942457280ac4d05ce15b634929f17abac2f2d0e0494226de-merged.mount: Deactivated successfully.
Oct 12 17:18:14 np0005481680 podman[263100]: 2025-10-12 21:18:14.508697767 +0000 UTC m=+0.528825746 container remove 6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcclintock, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:18:14 np0005481680 systemd[1]: libpod-conmon-6fba2428be6cb86ce7f82380633b4b9dbf5d10027c4764864135f6387da16133.scope: Deactivated successfully.
Oct 12 17:18:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:14.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.1 KiB/s wr, 153 op/s
Oct 12 17:18:14 np0005481680 podman[263292]: 2025-10-12 21:18:14.796474247 +0000 UTC m=+0.084689902 container create aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:18:14 np0005481680 python3.9[263286]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760303894.0993729-5029-119304125218016/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 12 17:18:14 np0005481680 podman[263292]: 2025-10-12 21:18:14.751719791 +0000 UTC m=+0.039935496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:18:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:14 np0005481680 systemd[1]: Started libpod-conmon-aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1.scope.
Oct 12 17:18:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64c7930fdada6aab05e7538d0a0acdfa05e7164e8dae8144143f9a6bb570b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64c7930fdada6aab05e7538d0a0acdfa05e7164e8dae8144143f9a6bb570b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64c7930fdada6aab05e7538d0a0acdfa05e7164e8dae8144143f9a6bb570b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64c7930fdada6aab05e7538d0a0acdfa05e7164e8dae8144143f9a6bb570b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:15 np0005481680 podman[263292]: 2025-10-12 21:18:15.00170383 +0000 UTC m=+0.289919465 container init aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:18:15 np0005481680 podman[263292]: 2025-10-12 21:18:15.014608005 +0000 UTC m=+0.302823640 container start aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:18:15 np0005481680 podman[263292]: 2025-10-12 21:18:15.084725909 +0000 UTC m=+0.372941534 container attach aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:18:15 np0005481680 python3.9[263389]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 12 17:18:15 np0005481680 systemd[1]: Reloading.
Oct 12 17:18:15 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:18:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:15 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:18:15 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:18:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:15 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:18:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:15 np0005481680 quizzical_morse[263329]: {}
Oct 12 17:18:15 np0005481680 podman[263292]: 2025-10-12 21:18:15.859215185 +0000 UTC m=+1.147430800 container died aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:18:15 np0005481680 systemd[1]: libpod-aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1.scope: Deactivated successfully.
Oct 12 17:18:15 np0005481680 systemd[1]: libpod-aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1.scope: Consumed 1.354s CPU time.
Oct 12 17:18:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:16 np0005481680 lvm[263512]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:18:16 np0005481680 lvm[263512]: VG ceph_vg0 finished
Oct 12 17:18:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b64c7930fdada6aab05e7538d0a0acdfa05e7164e8dae8144143f9a6bb570b37-merged.mount: Deactivated successfully.
Oct 12 17:18:16 np0005481680 lvm[263515]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:18:16 np0005481680 lvm[263515]: VG ceph_vg0 finished
Oct 12 17:18:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:16 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:16 np0005481680 podman[263292]: 2025-10-12 21:18:16.203162257 +0000 UTC m=+1.491377902 container remove aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:18:16 np0005481680 systemd[1]: libpod-conmon-aa5229e714e9b7dc68c1593af8398c576852e29bf8d3e128a1964982c5d9b6b1.scope: Deactivated successfully.
Oct 12 17:18:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:18:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:18:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211816 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:18:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:16 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:16 np0005481680 podman[263554]: 2025-10-12 21:18:16.3809432 +0000 UTC m=+0.123187840 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:18:16 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:16 np0005481680 python3.9[263608]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 12 17:18:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.1 KiB/s wr, 152 op/s
Oct 12 17:18:16 np0005481680 systemd[1]: Reloading.
Oct 12 17:18:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:16 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:16 np0005481680 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 12 17:18:16 np0005481680 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 12 17:18:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:17.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:18:17 np0005481680 systemd[1]: Starting nova_compute container...
Oct 12 17:18:17 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:17 np0005481680 podman[263674]: 2025-10-12 21:18:17.367798629 +0000 UTC m=+0.169792333 container init 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:18:17 np0005481680 podman[263674]: 2025-10-12 21:18:17.378730284 +0000 UTC m=+0.180723988 container start 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible)
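
The config_data blob in the two podman events above is stored on the container as a label, so the EDPM-managed definition can be recovered after the fact. A minimal sketch (assumes podman is on PATH and the nova_compute container still exists):

import json, subprocess

# Ask podman for the container's labels as JSON; config_data is one of them.
out = subprocess.check_output(
    ["podman", "inspect", "-f", "{{json .Config.Labels}}", "nova_compute"]
)
labels = json.loads(out)
print(labels.get("config_data"))  # the volumes/command/env definition seen above
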
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + sudo -E kolla_set_configs
Oct 12 17:18:17 np0005481680 podman[263674]: nova_compute
Oct 12 17:18:17 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:17 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:18:17 np0005481680 systemd[1]: Started nova_compute container.
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Validating config file
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying service configuration files
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Deleting /etc/ceph
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Creating directory /etc/ceph
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/ceph
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Writing out command to execute
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:17 np0005481680 nova_compute[263690]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 12 17:18:17 np0005481680 nova_compute[263690]: ++ cat /run_command
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + CMD=nova-compute
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + ARGS=
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + sudo kolla_copy_cacerts
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + [[ ! -n '' ]]
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + . kolla_extend_start
Oct 12 17:18:17 np0005481680 nova_compute[263690]: Running command: 'nova-compute'
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + echo 'Running command: '\''nova-compute'\'''
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + umask 0022
Oct 12 17:18:17 np0005481680 nova_compute[263690]: + exec nova-compute
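
The INFO:__main__ lines above are kolla_set_configs walking /var/lib/kolla/config_files/config.json with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: delete each destination, copy the source in, reset permissions, then write out the command that kolla_start later execs (the "cat /run_command" / "exec nova-compute" trace). A stripped-down sketch of that loop, assuming a config.json of the shape shown in the comment (the real script also handles directories, globs, and ownership):

import json, os, shutil

# config.json is roughly:
# {"command": "nova-compute",
#  "config_files": [{"source": "...", "dest": "...", "owner": "nova", "perm": "0600"}]}
with open("/var/lib/kolla/config_files/config.json") as fh:
    cfg = json.load(fh)

for f in cfg.get("config_files", []):
    if os.path.exists(f["dest"]):
        os.remove(f["dest"])                    # "Deleting ..."
    shutil.copy(f["source"], f["dest"])         # "Copying ... to ..."
    os.chmod(f["dest"], int(f["perm"], 8))      # "Setting permission for ..."

with open("/run_command", "w") as fh:           # "Writing out command to execute"
    fh.write(cfg["command"])
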
Oct 12 17:18:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:18.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
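
The beast access lines radosgw emits (here and in the repeats below) have a fixed layout: client address, user, timestamp, request, status, byte count, latency. An illustrative parser for the line above, nothing more than a regex over the visible fields:

import re

LINE = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
        '[12/Oct/2025:21:18:18.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

m = re.search(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s',
    LINE,
)
print(m.group("addr"), m.group("req"), m.group("status"), m.group("lat"))
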
Oct 12 17:18:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:18 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:18:18
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', '.nfs']
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
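
The balancer sequence above (Optimize plan, mode/threshold, do_upmap, pool list, "prepared 0/10") is one evaluation pass: each interval the module builds a plan, bails out if more than max_misplaced (0.050000 here) of the PGs are already moving, and otherwise asks the upmap optimizer for at most a fixed budget of changes across the pools; nothing was imbalanced, so 0 of a possible 10 changes were prepared. A sketch of that control flow with hypothetical helpers, not the mgr module's real code:

def balance_once(osdmap, pools, max_misplaced=0.05, max_changes=10):
    # Skip this round if too much data is already being shuffled.
    if misplaced_ratio(osdmap) > max_misplaced:        # hypothetical helper
        return []
    plan = []
    for pool in pools:
        plan += do_upmap(osdmap, pool,                 # hypothetical helper
                         budget=max_changes - len(plan))
        if len(plan) >= max_changes:
            break
    return plan                                        # [] here: "prepared 0/10"
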
Oct 12 17:18:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:18:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:18:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:18 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:18:18.352 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:18:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:18:18.353 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:18:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:18:18.353 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:18.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
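
Each pg_autoscaler pair above follows the same arithmetic: the pool's share of raw capacity, times its bias, times the cluster PG budget, gives a raw target that is then quantized to a power of two with a per-pool floor. The numbers reproduce exactly with a budget of 300, presumably mon_target_pg_per_osd=100 times 3 OSDs (60 GiB total), and floors inferred from the output (1 for '.mgr', 16 for 'cephfs.cephfs.meta', 32 elsewhere); whether the mgr actually resizes is a separate factor-of-three threshold. An illustrative reconstruction, with the budget and floors as assumptions rather than values read from this cluster:

def pg_target(usage_ratio, bias, pg_num_min,
              target_pg_per_osd=100, num_osds=3):
    # Raw target: share of capacity x bias x cluster PG budget (100 x 3 = 300).
    raw = usage_ratio * bias * target_pg_per_osd * num_osds
    # Quantize up to a power of two, never below the pool's floor.
    q = 1
    while q < raw:
        q *= 2
    return max(q, pg_num_min)

print(pg_target(7.185749983720779e-06, 1.0, 1))    # -> 1  ('.mgr')
print(pg_target(5.087256625643029e-07, 4.0, 16))   # -> 16 ('cephfs.cephfs.meta')
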
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:18:18 np0005481680 python3.9[263852]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:18:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 1.1 KiB/s wr, 152 op/s
Oct 12 17:18:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:18 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:18:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:18 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:19 np0005481680 python3.9[264004]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 12 17:18:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:20.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:20 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:20 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:20.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:20 np0005481680 python3.9[264180]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
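
The three ansible-ansible.builtin.stat invocations above (healthcheck unit, service unit, .requires directory) gather existence, checksum, and MIME facts before the role decides what to change. Roughly what the module computes per path, as an illustrative stand-in rather than the module source (ansible shells out to `file` for the real MIME lookup):

import hashlib, mimetypes, os

def stat_facts(path):
    # follow=False in the task maps to lstat (do not follow symlinks).
    if not os.path.lexists(path):
        return {"exists": False}
    st = os.lstat(path)
    facts = {"exists": True, "isdir": os.path.isdir(path),
             "mode": oct(st.st_mode & 0o7777), "size": st.st_size}
    if os.path.isfile(path):
        with open(path, "rb") as fh:                   # checksum_algorithm=sha1
            facts["checksum"] = hashlib.sha1(fh.read()).hexdigest()
        facts["mimetype"] = mimetypes.guess_type(path)[0]  # stand-in for get_mime
    return facts

print(stat_facts("/etc/systemd/system/edpm_nova_nvme_cleaner.service"))
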
Oct 12 17:18:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 1.3 KiB/s wr, 154 op/s
Oct 12 17:18:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:20 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211821 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:18:21 np0005481680 python3.9[264333]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
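
For all its parameters, the podman_container task above boils down to state=absent with force_delete=True: remove the nova_nvme_cleaner container, stopping it first if needed, and succeed even if it is already gone. The CLI call the module effectively wraps (sketch, assumes podman on PATH):

import subprocess

# --force stops a running container before removal; --ignore makes a missing
# container a no-op, matching state=absent idempotency.
subprocess.run(["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
               check=True)
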
Oct 12 17:18:21 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:18:21 np0005481680 nova_compute[263690]: 2025-10-12 21:18:21.861 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:21 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:18:21 np0005481680 nova_compute[263690]: 2025-10-12 21:18:21.861 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:21 np0005481680 nova_compute[263690]: 2025-10-12 21:18:21.862 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:21 np0005481680 nova_compute[263690]: 2025-10-12 21:18:21.862 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
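
The "Loaded VIF plugin class" lines are os_vif's initialize() discovering plugins through setuptools entry points, which it does via stevedore. The same discovery can be reproduced outside nova with a few lines (minimal sketch; assumes the os-vif plugin packages are installed in the environment):

from stevedore import extension

# Enumerate everything registered under the 'os_vif' entry-point namespace.
mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
for ext in mgr:
    print(ext.name, ext.plugin)   # e.g. linux_bridge, noop, ovs
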
Oct 12 17:18:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:22] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:18:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:22] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:18:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:22.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:22 np0005481680 nova_compute[263690]: 2025-10-12 21:18:22.133 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:18:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:22 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:22 np0005481680 nova_compute[263690]: 2025-10-12 21:18:22.180 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
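
The grep above is a capability probe: the volume-attach stack greps the iscsiadm binary for the node.session.scan string to learn whether manual session scanning is supported, and the exit code 0 here means it is. Reproducing the call through the same oslo wrapper (sketch):

from oslo_concurrency import processutils

# execute() returns (stdout, stderr); exit code 1 (string absent) is tolerated
# so a non-match is an answer rather than an exception.
out, err = processutils.execute(
    "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
    check_exit_code=[0, 1],
)
print("manual scan supported:", "node.session.scan" in out)
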
Oct 12 17:18:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:22 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:22.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:22 np0005481680 nova_compute[263690]: 2025-10-12 21:18:22.767 2 INFO nova.virt.driver [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 12 17:18:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:18:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:22 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:22 np0005481680 nova_compute[263690]: 2025-10-12 21:18:22.998 2 INFO nova.compute.provider_config [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.016 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.017 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.017 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
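
The acquiring/acquired/releasing triple above is oslo.concurrency's lock primitive guarding one-time service setup. The same pattern in two lines (sketch):

from oslo_concurrency import lockutils

# An in-process fair lock; pass external=True (plus a lock_path) to
# serialize across processes instead of threads.
with lockutils.lock("singleton_lock"):
    pass  # one-time setup guarded here
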
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.018 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.018 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.018 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.018 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.019 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.019 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
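
The banner above and the long per-option listing that follows are oslo.config's startup dump of every resolved option, source files included. Any oslo-based service can emit the same block on demand (sketch):

import logging
from oslo_config import cfg

LOG = logging.getLogger(__name__)

# Prints the asterisk banner, the config file list, then one line per option
# with its resolved value, in the format seen throughout this log.
cfg.CONF.log_opt_values(LOG, logging.DEBUG)
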
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.019 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.019 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.019 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.020 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.020 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.020 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.020 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.020 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.021 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.021 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.021 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.021 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.021 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.022 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.022 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.022 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.022 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.023 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.023 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.023 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.023 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.023 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.024 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.024 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.024 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.024 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.025 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.025 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.025 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.025 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.025 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.026 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.026 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.026 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.026 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.027 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.027 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.027 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.027 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.027 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.028 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.028 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.028 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.028 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.029 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.029 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.029 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.029 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.030 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.030 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.030 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.030 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.030 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.031 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.031 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.031 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.031 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.031 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.032 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.032 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.032 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.032 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.032 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.033 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.033 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.033 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.033 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.033 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.034 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.034 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.034 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.034 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.034 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.035 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.035 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.035 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.035 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.035 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.036 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.037 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.037 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.037 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.037 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.037 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.038 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.038 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.038 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.038 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.038 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.039 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.039 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.039 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.039 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.040 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.041 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.041 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.041 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.041 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.041 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.042 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.042 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.042 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.042 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.042 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.043 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.043 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.043 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.043 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.043 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.044 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.044 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.044 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.044 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.044 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.045 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.045 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.045 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.045 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.045 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.046 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.046 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.046 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.046 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.046 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.047 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.047 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.047 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.047 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.047 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.048 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.048 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.048 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.048 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.048 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.049 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.049 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.049 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.049 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.049 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.050 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.050 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.050 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.050 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.051 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.052 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.053 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.054 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.055 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.056 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.057 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.058 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.059 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.060 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.061 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.062 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.063 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.064 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.065 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.066 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.067 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.068 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.069 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.070 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
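[annotation] Every "log_opt_values" line above is produced by oslo.config's ConfigOpts.log_opt_values(), which nova-compute calls once at service startup to dump every registered option, group by group; options registered with secret=True are masked, which is why api_database.slave_connection (and key_manager.fixed_key further down) print as ****. A minimal, self-contained sketch of the same mechanism — group and option names chosen to mirror the log, not nova's actual registration code:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_group(cfg.OptGroup(name='api_database'))
    CONF.register_opts(
        [
            cfg.IntOpt('connection_recycle_time', default=3600),
            cfg.IntOpt('max_retries', default=10),
            # secret=True is what makes log_opt_values print '****'
            # instead of the real value
            cfg.StrOpt('slave_connection', secret=True),
        ],
        group='api_database',
    )
    CONF([])  # parse an empty command line so the options become readable
    # emits one DEBUG line per option, in the same
    # "<group>.<opt> = <value>" shape seen throughout this log
    CONF.log_opt_values(LOG, logging.DEBUG)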
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.071 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
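[annotation] On the cipher above: aes-xts-plain64 is the usual dm-crypt XTS mode, and XTS consumes twice the AES key length, so the configured key_size = 512 corresponds to AES-256 (these are defaults here, since enabled = False). A one-line sanity check, not nova code:

    # XTS splits the configured key into two halves (data key + tweak key),
    # so ephemeral_storage_encryption.key_size = 512 means AES-256-XTS
    key_size = 512
    assert key_size // 2 == 256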
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.072 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.073 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.074 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.075 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
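[annotation] With glance.api_servers = None, nova resolves the image endpoint from the Keystone catalog using the service_type, valid_interfaces, and region_name values above. A rough keystoneauth1 equivalent — credentials are not in this log, so the unauthenticated Session below exists only to make the sketch construct:

    from keystoneauth1 import adapter, session

    sess = session.Session()  # real code would pass an auth plugin here
    image_api = adapter.Adapter(
        session=sess,
        service_type='image',      # glance.service_type
        interface='internal',      # glance.valid_interfaces = ['internal']
        region_name='regionOne',   # glance.region_name
    )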
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.076 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.077 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.078 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.079 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.080 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
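[annotation] The image_cache intervals above are all in seconds; converted, the cache manager sweeps every 40 minutes, unused base images survive 24 hours, and unused resized images survive 1 hour. A quick conversion, not nova code:

    manager_interval = 2400      # image_cache.manager_interval
    unused_original  = 86400     # remove_unused_original_minimum_age_seconds
    unused_resized   = 3600      # remove_unused_resized_minimum_age_seconds
    print(manager_interval / 60,        # 40.0 minutes
          unused_original / 3600,       # 24.0 hours
          unused_resized / 3600)        # 1.0 hour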
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.081 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.082 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.083 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
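[annotation] ironic.api_max_retries = 60 together with ironic.api_retry_interval = 2 bounds how long nova would wait for the bare-metal API (unused on this libvirt node, but the arithmetic holds), assuming evenly spaced retries:

    api_max_retries, api_retry_interval = 60, 2
    print(api_max_retries * api_retry_interval)  # 120 seconds (~2 minutes)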
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.084 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.085 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.086 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.087 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.088 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.089 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.090 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.091 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.092 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.093 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.094 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.095 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.096 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.097 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.098 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 WARNING oslo_config.cfg [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 12 17:18:23 np0005481680 nova_compute[263690]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 12 17:18:23 np0005481680 nova_compute[263690]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Oct 12 17:18:23 np0005481680 nova_compute[263690]: and ``live_migration_inbound_addr`` respectively.
Oct 12 17:18:23 np0005481680 nova_compute[263690]: ).  Its value may be silently ignored in the future.#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.099 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
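[annotation] The warning above means the configured qemu+tls://%s/system URI can be expressed without the deprecated option: live_migration_scheme supplies the "tls" part and live_migration_inbound_addr the target address (the %s is filled in by nova from the migration target). A nova.conf sketch of the assumed equivalent — <target-host> is illustrative, not a value from this log:

    [libvirt]
    # deprecated form, as currently configured:
    #   live_migration_uri = qemu+tls://%s/system
    # preferred replacements per the warning:
    live_migration_scheme = tls
    # live_migration_inbound_addr = <target-host>  # optional override of the
    #                                              # migration target address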
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.100 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.101 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rbd_secret_uuid        = 5adb8c35-1b74-5730-a252-62321f654cd5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.102 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.103 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.104 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.104 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.104 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.104 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.104 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.105 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.106 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.107 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.108 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.109 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.110 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.111 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.112 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.113 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.113 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.113 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.113 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.113 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.114 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.115 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.116 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.117 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.118 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.119 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.120 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.121 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.122 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.123 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.124 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.125 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.126 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.127 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.128 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.129 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.130 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.131 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.132 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.133 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.134 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.135 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.136 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.137 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.138 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.139 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.140 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.141 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.142 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.143 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.144 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.145 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.146 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.147 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.148 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.149 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.150 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.151 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.152 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.153 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.154 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.155 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.156 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.157 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.158 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.159 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.160 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.161 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.162 2 DEBUG oslo_service.service [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
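[editor's note] The block above is the tail of oslo.config's start-up option dump: every registered option, including the privsep groups, is echoed at DEBUG level, and the row of asterisks closes the dump. A minimal sketch of the mechanism, assuming only oslo.config and the stdlib logger (the group and option names below are illustrative, not Nova's):

```python
import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [cfg.IntOpt('thread_pool_size', default=8),
     cfg.StrOpt('user')],           # unset options print as "= None"
    group='demo_privileged')        # hypothetical group name

CONF([])                            # parse an empty command line
# Emits one DEBUG line per option plus the banner rows, the same shape
# as the dump above.
CONF.log_opt_values(LOG, logging.DEBUG)
```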
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.163 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.181 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.182 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.182 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.182 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 12 17:18:23 np0005481680 systemd[1]: Starting libvirt QEMU daemon...
Oct 12 17:18:23 np0005481680 systemd[1]: Started libvirt QEMU daemon.
Oct 12 17:18:23 np0005481680 python3.9[264514]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.290 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbdd34e0460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.293 2 DEBUG nova.virt.libvirt.host [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbdd34e0460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.294 2 INFO nova.virt.libvirt.driver [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.313 2 WARNING nova.virt.libvirt.driver [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
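[editor's note] The WARNING above is expected on the first start of a freshly deployed compute: no service record for the host exists in the cell database until nova-compute creates one, so the status update is skipped. One way to confirm registration afterwards, sketched with openstacksdk (the cloud name is a hypothetical clouds.yaml entry):

```python
import openstack

# Assumes a clouds.yaml entry named 'overcloud'; adjust to your deployment.
conn = openstack.connect(cloud='overcloud')
for svc in conn.compute.services():
    if svc.host == 'compute-0.ctlplane.example.com':
        print(svc.binary, svc.status, svc.state)  # e.g. nova-compute enabled up
```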
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.313 2 DEBUG nova.virt.libvirt.volume.mount [None req-253fb3b6-454c-4a55-b036-50665c85f6bf - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 12 17:18:23 np0005481680 systemd[1]: Stopping nova_compute container...
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.379 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.379 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:18:23 np0005481680 nova_compute[263690]: 2025-10-12 21:18:23.380 2 DEBUG oslo_concurrency.lockutils [None req-2afebf06-f0c2-48ac-b051-8adb1e7cdad8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
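[editor's note] The three lockutils lines above are the standard acquire/act/release pattern around oslo.service's "singleton_lock", which guards the per-process service-launcher singleton during shutdown handling. A sketch of the same pattern with oslo.concurrency (an in-process lock, so no lock_path is required):

```python
from oslo_concurrency import lockutils

# The context manager logs "Acquiring", "Acquired" and "Releasing" for the
# named lock at DEBUG, matching the three lines above.
with lockutils.lock('singleton_lock'):
    pass  # critical section: tear down / create the launcher singleton
```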
Oct 12 17:18:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:18:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:24.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:18:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:24 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:24 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4001480 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
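[editor's note] The paired radosgw requests every ~2 s, one each from 192.168.122.100 and .102, have the shape of load-balancer health checks: an anonymous HEAD / answered 200 with near-zero latency and no body. A rough reproduction of such a probe; the address and port are assumptions, and http.client speaks HTTP/1.1 rather than the HTTP/1.0 the balancer uses:

```python
import http.client

# Hypothetical radosgw beast endpoint; substitute your RGW address and port.
conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
conn.request('HEAD', '/')
resp = conn.getresponse()
print(resp.status)  # a healthy gateway answers 200 with an empty body
```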
Oct 12 17:18:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:18:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:24 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:25 np0005481680 virtqemud[264537]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 12 17:18:25 np0005481680 virtqemud[264537]: hostname: compute-0
Oct 12 17:18:25 np0005481680 virtqemud[264537]: End of file while reading data: Input/output error
Oct 12 17:18:25 np0005481680 systemd[1]: libpod-6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b.scope: Deactivated successfully.
Oct 12 17:18:25 np0005481680 systemd[1]: libpod-6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b.scope: Consumed 3.653s CPU time.
Oct 12 17:18:25 np0005481680 podman[264561]: 2025-10-12 21:18:25.15751241 +0000 UTC m=+1.829065569 container died 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:18:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b-userdata-shm.mount: Deactivated successfully.
Oct 12 17:18:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay-03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7-merged.mount: Deactivated successfully.
Oct 12 17:18:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:26 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4001480 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:26 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:26.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:26 np0005481680 podman[264612]: 2025-10-12 21:18:26.61549607 +0000 UTC m=+0.076096255 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:18:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Oct 12 17:18:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:26 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4001480 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:27.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:18:27 np0005481680 podman[264561]: 2025-10-12 21:18:27.32256149 +0000 UTC m=+3.994114669 container cleanup 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:18:27 np0005481680 podman[264561]: nova_compute
Oct 12 17:18:27 np0005481680 podman[264635]: nova_compute
Oct 12 17:18:27 np0005481680 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 12 17:18:27 np0005481680 systemd[1]: Stopped nova_compute container.
Oct 12 17:18:27 np0005481680 systemd[1]: Starting nova_compute container...
Oct 12 17:18:27 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:27 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03daee7d50927595a4e93925c80dbc2dad98f0389b2323084ca1db36fd368cb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:27 np0005481680 podman[264648]: 2025-10-12 21:18:27.592911441 +0000 UTC m=+0.123001466 container init 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Oct 12 17:18:27 np0005481680 podman[264648]: 2025-10-12 21:18:27.599709012 +0000 UTC m=+0.129799007 container start 6e50f659177ab42f4a899a3822b6a5ae9168936c28d08e0680cdd08131466c5b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true)
Oct 12 17:18:27 np0005481680 podman[264648]: nova_compute
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + sudo -E kolla_set_configs
Oct 12 17:18:27 np0005481680 systemd[1]: Started nova_compute container.
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Validating config file
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying service configuration files
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /etc/ceph
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Creating directory /etc/ceph
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/ceph
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Writing out command to execute
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:27 np0005481680 nova_compute[264665]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
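[editor's note] The INFO:__main__ run above is kolla_set_configs executing the COPY_ALWAYS strategy: for each entry in /var/lib/kolla/config_files/config.json it deletes the stale destination, copies the source in, and applies the configured mode; the second round of "Setting permission" lines comes from the separate permissions section of that file. A condensed sketch of the copy loop, assuming the standard kolla config.json keys (source, dest, perm) and omitting directory and owner handling:

```python
import json
import os
import shutil

with open('/var/lib/kolla/config_files/config.json') as f:
    config = json.load(f)

for entry in config.get('config_files', []):
    dest = entry['dest']
    if os.path.lexists(dest):
        os.remove(dest)                                 # "Deleting <dest>"
    shutil.copy(entry['source'], dest)                  # "Copying <src> to <dest>"
    os.chmod(dest, int(entry.get('perm', '0600'), 8))   # "Setting permission"
```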
Oct 12 17:18:27 np0005481680 nova_compute[264665]: ++ cat /run_command
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + CMD=nova-compute
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + ARGS=
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + sudo kolla_copy_cacerts
Oct 12 17:18:27 np0005481680 nova_compute[264665]: Running command: 'nova-compute'
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + [[ ! -n '' ]]
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + . kolla_extend_start
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + echo 'Running command: '\''nova-compute'\'''
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + umask 0022
Oct 12 17:18:27 np0005481680 nova_compute[264665]: + exec nova-compute
Oct 12 17:18:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:28.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:28 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:28 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Oct 12 17:18:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:28 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.501 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.501 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.502 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.502 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
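[editor's note] The four os_vif lines above show plugin discovery: each VIF plugin class is loaded from a setuptools entry point, reported once, then summarized. A sketch of that discovery step with stevedore, which os-vif uses under the 'os_vif' entry-point namespace:

```python
from stevedore import extension

# Enumerate installed VIF plugins without instantiating them.
mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
print(sorted(mgr.names()))  # e.g. ['linux_bridge', 'noop', 'ovs']
```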
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.625 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:18:29 np0005481680 nova_compute[264665]: 2025-10-12 21:18:29.654 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
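[editor's note] The grep above is os-brick's capability probe: it searches the iscsiadm binary for the node.session.scan option string to decide whether manual scan mode can be used; the exit code 0 logged here means the option is present. The same probe, sketched with oslo.concurrency:

```python
from oslo_concurrency import processutils

# Exit code 0: string found (manual scan supported); 1: not found.
out, _err = processutils.execute(
    'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
    check_exit_code=(0, 1))
manual_scan_supported = bool(out)
```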
Oct 12 17:18:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:30.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.074 2 INFO nova.virt.driver [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 12 17:18:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:30 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f40028a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.208 2 INFO nova.compute.provider_config [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.221 2 DEBUG oslo_concurrency.lockutils [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_concurrency.lockutils [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_concurrency.lockutils [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.222 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.223 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.224 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.225 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.226 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.227 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.228 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.229 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.230 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.231 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.232 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.233 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.234 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.237 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.237 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.237 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.237 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.238 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.239 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.240 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.241 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.242 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.243 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.244 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.245 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
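Everything up to this point is nova-compute echoing its effective [DEFAULT] configuration at startup: oslo.config's ConfigOpts.log_opt_values() emits one DEBUG record per option. The trailing "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602" on each line is just logging_debug_format_suffix (%(funcName)s %(pathname)s:%(lineno)d) being appended to DEBUG records, "#033[00m" is the journal-escaped ANSI colour reset, and options registered as secret (such as transport_url above) print as ****. A minimal, self-contained sketch of the same mechanism; the three options are illustrative stand-ins, not nova's actual registrations:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('state_path', default='/var/lib/nova'),  # illustrative
        cfg.IntOpt('report_interval', default=10),          # illustrative
        cfg.StrOpt('transport_url', secret=True),           # prints as ****
    ])
    CONF([])  # parse an empty argv so defaults take effect

    # One DEBUG record per option, the same shape as the journal lines above.
    CONF.log_opt_values(LOG, logging.DEBUG)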
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
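The dump has now switched to grouped options (note the call site change from cfg.py:2602 to cfg.py:2609): oslo_concurrency.lock_path is the lock_path option of the [oslo_concurrency] group, and it is the directory where external (inter-process) file locks are created. A minimal sketch of taking such a lock with oslo.concurrency, assuming an illustrative lock name:

    from oslo_concurrency import lockutils

    # Matches oslo_concurrency.lock_path in the dump above.
    lockutils.set_defaults('/var/lib/nova/tmp')

    @lockutils.synchronized('demo-lock', external=True)  # name is illustrative
    def critical_section():
        # external=True serializes across *processes* via a lock file
        # created under lock_path, not just across threads in this one.
        print('holding the file lock')

    critical_section()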
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.246 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.247 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.248 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.249 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
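The api.* block above corresponds one-to-one to the [api] section of nova.conf, and the cache.*, cinder.*, compute.* and database.* blocks that follow map to their own sections the same way. A small sketch of that group-to-section mapping, assuming a throwaway illustrative config file rather than this node's real nova.conf:

    import tempfile
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('auth_strategy', default='keystone')],
                       group='api')

    with tempfile.NamedTemporaryFile(suffix='.conf') as f:
        f.write(b"[api]\nauth_strategy = keystone\n")  # illustrative file
        f.flush()
        CONF(['--config-file', f.name])

    # Grouped access: the dotted "api.auth_strategy" in the log is this lookup.
    print(CONF.api.auth_strategy)   # -> keystone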
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.250 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.251 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.252 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.253 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.254 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
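In the cache.* block above, cache.backend = oslo_cache.dict with cache.enabled = True means this service caches in a per-process Python dict; the memcache_* values (including ['localhost:11211']) are untouched defaults rather than live memcached settings. A hedged sketch of wiring the same [cache] options up with oslo.cache:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    cache.configure(CONF)   # registers the [cache] options listed above
    CONF([])
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    region.set('answer', 42)
    assert region.get('answer') == 42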
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.255 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.256 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
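cinder.catalog_info = volumev3:cinderv3:internalURL is a service_type:service_name:interface triple telling nova which Cinder endpoint to select from the Keystone service catalog. A hedged keystoneauth sketch of the equivalent lookup; auth_url and the credentials are placeholders, since the dump masks or defaults the real values:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder credentials, for illustration only.
    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # catalog_info "volumev3:cinderv3:internalURL" decomposed:
    url = sess.get_endpoint(service_type='volumev3',
                            service_name='cinderv3',
                            interface='internal')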
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.257 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.258 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.259 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.260 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.261 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.262 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.263 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.264 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.265 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.266 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.267 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.268 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.269 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.270 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.271 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.272 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.272 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.272 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.272 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.273 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.274 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.275 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.276 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.276 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.276 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.276 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.276 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.277 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.278 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.279 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.280 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.281 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.282 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.283 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.284 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.285 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.286 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.287 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.288 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.289 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.290 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.291 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.292 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.293 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.294 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.295 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 WARNING oslo_config.cfg [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 12 17:18:30 np0005481680 nova_compute[264665]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 12 17:18:30 np0005481680 nova_compute[264665]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 12 17:18:30 np0005481680 nova_compute[264665]: and ``live_migration_inbound_addr`` respectively.
Oct 12 17:18:30 np0005481680 nova_compute[264665]: ).  Its value may be silently ignored in the future.#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.296 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.297 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.298 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rbd_secret_uuid        = 5adb8c35-1b74-5730-a252-62321f654cd5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.299 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.300 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.301 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.302 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.303 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.304 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.305 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.306 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.307 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.308 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.309 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.310 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.311 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.312 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.313 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.314 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.315 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.316 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.317 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.318 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.319 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.320 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.321 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
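Every nova_compute line in this dump comes from the same place: oslo.config's ConfigOpts.log_opt_values(), which walks each registered option group at startup and logs one "group.option = value" pair per line at DEBUG (a fixed-width format is why short names such as metrics.required line up in columns below). A minimal sketch of that mechanism, assuming only the oslo.config package; the two options are an illustrative subset of the [filter_scheduler] group logged above, not nova's full schema.

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.ListOpt('enabled_filters',
                        default=['ComputeFilter', 'ComputeCapabilitiesFilter'],
                        help='Scheduler filters to enable.'),
            cfg.FloatOpt('ram_weight_multiplier', default=1.0,
                         help='RAM weigher multiplier.'),
        ],
        group='filter_scheduler',
    )

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty command line so defaults are resolved
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # logs lines like: filter_scheduler.enabled_filters = ['ComputeFilter', ...]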
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.322 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
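The [serial_console] block above is internally consistent even though the proxy is off (enabled = False): port_range is the pool of backend TCP ports handed to guest serial devices on proxyclient_address, and base_url is the public websocket endpoint clients would use. A small sketch that only parses the logged values; parse_port_range is our helper name, not nova's.

    def parse_port_range(port_range: str) -> range:
        """Turn the logged '10000:20000' into a range of candidate ports."""
        lo, hi = (int(part) for part in port_range.split(':'))
        return range(lo, hi)

    ports = parse_port_range('10000:20000')   # serial_console.port_range above
    base_url = 'ws://127.0.0.1:6083/'         # serial_console.base_url above
    print(f'{len(ports)} candidate ports, exposed through {base_url}')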
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.323 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.324 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
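The [service_user] group above (auth_type = password, send_service_user_token = True) makes nova attach its own service token alongside the incoming user token, so long-running operations still validate after the user's token expires. A hedged sketch of loading such a plugin with keystoneauth1; every credential below is a placeholder, since the log only shows the auth_type.

    from keystoneauth1 import loading

    # 'password' matches service_user.auth_type in the log.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone.example:5000',  # placeholder
        username='nova',                           # placeholder
        password='not-the-real-one',               # placeholder
        user_domain_name='Default',                # placeholder
    )
    print(type(auth).__name__)  # a keystoneauth Password plugin instance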
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.325 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.326 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.327 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.328 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.329 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.330 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.331 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
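Note that vmware.host_password above prints as **** even though the whole [vmware] group is otherwise unconfigured: any option declared with secret=True is masked by log_opt_values(), which is also why oslo_messaging_notifications.transport_url further down shows ****. A minimal demonstration, assuming only oslo.config; the dummy default stands in for a real credential.

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('host_password', secret=True, default='s3cret')],
        group='vmware',
    )
    logging.basicConfig(level=logging.DEBUG)
    CONF([])
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # logs: vmware.host_password           = ****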
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.332 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.333 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
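vnc.server_listen = ::0 above is the IPv6 wildcard, so each guest's VNC server binds every local address, while vnc.server_proxyclient_address (192.168.122.100) is the address the noVNC proxy dials back to. A quick illustration of the wildcard bind with a plain socket; port 0 lets the kernel pick a free port, and on most Linux hosts (with the default bindv6only=0) a :: listener accepts IPv4-mapped connections as well.

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.bind(('::', 0))        # the wildcard the log writes as ::0
    print(s.getsockname())   # ('::', <kernel-chosen port>, 0, 0)
    s.close()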
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.334 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.335 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.336 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
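Three of the workarounds.* values above act as a unit: enable_qemu_monitor_announce_self = True makes nova ask the QEMU monitor to re-announce the guest's MAC after live migration, qemu_monitor_announce_self_count = 3 repeats the announcement, and qemu_monitor_announce_self_interval = 1 spaces the repeats in seconds. A sketch of that loop's shape; announce is a stand-in for the real monitor call inside nova's libvirt driver, not nova code.

    import time

    def announce_self(announce, count: int = 3, interval: float = 1.0) -> None:
        """Repeat the announce call 'count' times, 'interval' seconds apart."""
        for attempt in range(1, count + 1):
            announce()                  # hypothetical monitor call
            if attempt < count:
                time.sleep(interval)    # interval = 1 in the log

    announce_self(lambda: print('announce_self sent'))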
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.337 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.338 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
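wsgi.wsgi_log_format above is an old-style %-format template, so the access-log line it would produce can be previewed by formatting it against a dict; the request values below are invented purely for illustration.

    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')

    print(fmt % {
        'client_ip': '192.168.122.1',                  # sample values only
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1534,
        'wall_seconds': 0.0421337,
    })
    # 192.168.122.1 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1534 time: 0.0421337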
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.339 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.340 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.341 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.342 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:30 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.343 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.344 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.345 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
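Two of the oslo_messaging_rabbit values above are usually read together: heartbeat_timeout_threshold = 60 is how long a silent AMQP connection survives, and heartbeat_rate = 2 is how many heartbeat checks run per threshold window, i.e. one check roughly every 30 seconds. The arithmetic, spelled out:

    heartbeat_timeout_threshold = 60   # seconds, from the log
    heartbeat_rate = 2                 # checks per threshold window
    check_interval = heartbeat_timeout_threshold / heartbeat_rate
    print(f'heartbeat checked about every {check_interval:.0f}s')   # ~30s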
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.346 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.347 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.348 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.349 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.350 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.351 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.352 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.353 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.354 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.355 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.356 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.357 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.358 2 DEBUG oslo_service.service [None req-2287bddd-e7b0-432b-ae47-9a05968f92c3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.359 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.370 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.371 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.371 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.371 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.385 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6fb2aab160> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.388 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6fb2aab160> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.389 2 INFO nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Connection event '1' reason 'None'
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.399 2 INFO nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Libvirt host capabilities <capabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <host>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <uuid>7d715b3e-003b-4a6c-84d2-be911b9b9ce7</uuid>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <arch>x86_64</arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model>EPYC-Rome-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <vendor>AMD</vendor>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <microcode version='16777317'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <signature family='23' model='49' stepping='0'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='x2apic'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='tsc-deadline'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='osxsave'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='hypervisor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='tsc_adjust'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='spec-ctrl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='stibp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='arch-capabilities'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='cmp_legacy'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='topoext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='virt-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='lbrv'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='tsc-scale'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='vmcb-clean'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='pause-filter'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='pfthreshold'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='svme-addr-chk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='rdctl-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='skip-l1dfl-vmentry'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='mds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature name='pschange-mc-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <pages unit='KiB' size='4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <pages unit='KiB' size='2048'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <pages unit='KiB' size='1048576'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <power_management>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <suspend_mem/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </power_management>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <iommu support='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <migration_features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <live/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <uri_transports>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <uri_transport>tcp</uri_transport>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <uri_transport>rdma</uri_transport>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </uri_transports>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </migration_features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <topology>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <cells num='1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <cell id='0'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <memory unit='KiB'>7864356</memory>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <pages unit='KiB' size='2048'>0</pages>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <distances>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <sibling id='0' value='10'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          </distances>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          <cpus num='8'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:          </cpus>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        </cell>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </cells>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </topology>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <cache>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </cache>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <secmodel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model>selinux</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <doi>0</doi>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </secmodel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <secmodel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model>dac</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <doi>0</doi>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </secmodel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </host>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <guest>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <os_type>hvm</os_type>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <arch name='i686'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <wordsize>32</wordsize>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <domain type='qemu'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <domain type='kvm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <pae/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <nonpae/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <acpi default='on' toggle='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <apic default='on' toggle='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <cpuselection/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <deviceboot/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <disksnapshot default='on' toggle='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <externalSnapshot/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </guest>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <guest>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <os_type>hvm</os_type>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <arch name='x86_64'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <wordsize>64</wordsize>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <domain type='qemu'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <domain type='kvm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <acpi default='on' toggle='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <apic default='on' toggle='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <cpuselection/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <deviceboot/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <disksnapshot default='on' toggle='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <externalSnapshot/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </guest>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 
Oct 12 17:18:30 np0005481680 nova_compute[264665]: </capabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.400 2 WARNING nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.400 2 DEBUG nova.virt.libvirt.volume.mount [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.407 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.457 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 12 17:18:30 np0005481680 nova_compute[264665]: <domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <path>/usr/libexec/qemu-kvm</path>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <domain>kvm</domain>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <arch>i686</arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <vcpu max='4096'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <iothreads supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <os supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='firmware'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <loader supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>rom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pflash</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='readonly'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>yes</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='secure'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </loader>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-passthrough' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='hostPassthroughMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='maximum' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='maximumMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-model' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <vendor>AMD</vendor>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='x2apic'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-deadline'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='hypervisor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc_adjust'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='spec-ctrl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='stibp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='arch-capabilities'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='cmp_legacy'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='overflow-recov'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='succor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='amd-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='virt-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lbrv'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-scale'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='vmcb-clean'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='flushbyasid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pause-filter'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pfthreshold'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='svme-addr-chk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rdctl-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='mds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='gds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rfds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='disable' name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='custom' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Dhyana-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-128'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-256'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-512'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v6'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v7'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </cpu>
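The <cpu> section above enumerates every named CPU model with a usable attribute and, for each unusable model, a <blockers> element listing the host-missing features. A minimal sketch of how such output can be consumed, assuming the XML has been captured to a file first (the filename and the capture step, e.g. virsh domcapabilities > domcaps.xml, are illustrative, and the usable/blockers pairs are assumed to sit under the custom CPU mode as in standard libvirt domcapabilities output):

    import xml.etree.ElementTree as ET

    # Hypothetical capture of the XML logged above:
    #   virsh domcapabilities > domcaps.xml
    root = ET.parse("domcaps.xml").getroot()

    # Walk the named models under the custom CPU mode; each unusable
    # model is followed by a <blockers model='...'> sibling listing the
    # features the host lacks.
    mode = root.find("./cpu/mode[@name='custom']")
    for model in mode.iterfind("model"):
        blockers = mode.find(f"blockers[@model='{model.text}']")
        missing = [f.get("name") for f in blockers] if blockers is not None else []
        print(model.text, model.get("usable"), missing)

On this host such a walk would report, for example, Nehalem and Westmere as usable, while Skylake-Server is blocked on the AVX-512 family plus erms/invpcid/pcid/pku.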
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <memoryBacking supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='sourceType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>file</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>anonymous</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>memfd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </memoryBacking>
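The <memoryBacking> element advertises which guest-memory source types this libvirt/QEMU accepts; here file, anonymous, and memfd are all available. They can be read the same way (same hypothetical domcaps.xml as in the sketch above):

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()  # same hypothetical capture
    types = [v.text for v in
             root.iterfind("./memoryBacking/enum[@name='sourceType']/value")]
    print(types)  # expected from this log: ['file', 'anonymous', 'memfd']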
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <disk supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='diskDevice'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>disk</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cdrom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>floppy</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>lun</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>fdc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>sata</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <graphics supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vnc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egl-headless</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>dbus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <video supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='modelType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vga</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cirrus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>none</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>bochs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ramfb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hostdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='mode'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>subsystem</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='startupPolicy'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>mandatory</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>requisite</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>optional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='subsysType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pci</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='capsType'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='pciBackend'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hostdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <rng supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>random</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <filesystem supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='driverType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>path</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>handle</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtiofs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </filesystem>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <tpm supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-tis</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-crb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emulator</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>external</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendVersion'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>2.0</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </tpm>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <redirdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </redirdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <channel supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pty</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>unix</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </channel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <crypto supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>qemu</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </crypto>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <interface supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>passt</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <panic supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>isa</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>hyperv</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </panic>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <gic supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <vmcoreinfo supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <genid supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backingStoreInput supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backup supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <async-teardown supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <ps2 supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sev supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sgx supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hyperv supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='features'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>relaxed</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vapic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>spinlocks</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vpindex</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>runtime</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>synic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>stimer</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reset</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vendor_id</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>frequencies</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reenlightenment</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tlbflush</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ipi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>avic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emsr_bitmap</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>xmm_input</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hyperv>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <launchSecurity supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: </domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.465 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 12 17:18:30 np0005481680 nova_compute[264665]: <domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <path>/usr/libexec/qemu-kvm</path>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <domain>kvm</domain>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <arch>i686</arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <vcpu max='240'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <iothreads supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <os supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='firmware'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <loader supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>rom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pflash</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='readonly'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>yes</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='secure'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </loader>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-passthrough' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='hostPassthroughMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='maximum' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='maximumMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-model' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <vendor>AMD</vendor>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='x2apic'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-deadline'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='hypervisor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc_adjust'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='spec-ctrl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='stibp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='arch-capabilities'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='cmp_legacy'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='overflow-recov'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='succor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='amd-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='virt-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lbrv'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-scale'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='vmcb-clean'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='flushbyasid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pause-filter'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pfthreshold'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='svme-addr-chk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rdctl-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='mds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='gds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rfds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='disable' name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='custom' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Dhyana-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-128'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-256'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-512'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v6'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v7'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <memoryBacking supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='sourceType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>file</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>anonymous</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>memfd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </memoryBacking>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <disk supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='diskDevice'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>disk</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cdrom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>floppy</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>lun</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ide</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>fdc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>sata</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <graphics supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vnc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egl-headless</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>dbus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <video supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='modelType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vga</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cirrus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>none</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>bochs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ramfb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hostdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='mode'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>subsystem</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='startupPolicy'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>mandatory</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>requisite</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>optional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='subsysType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pci</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='capsType'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='pciBackend'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hostdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <rng supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>random</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <filesystem supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='driverType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>path</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>handle</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtiofs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </filesystem>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <tpm supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-tis</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-crb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emulator</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>external</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendVersion'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>2.0</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </tpm>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <redirdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </redirdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <channel supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pty</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>unix</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </channel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <crypto supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>qemu</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </crypto>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <interface supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>passt</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <panic supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>isa</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>hyperv</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </panic>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <gic supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <vmcoreinfo supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <genid supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backingStoreInput supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backup supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <async-teardown supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <ps2 supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sev supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sgx supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hyperv supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='features'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>relaxed</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vapic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>spinlocks</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vpindex</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>runtime</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>synic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>stimer</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reset</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vendor_id</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>frequencies</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reenlightenment</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tlbflush</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ipi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>avic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emsr_bitmap</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>xmm_input</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hyperv>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <launchSecurity supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: </domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
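In the custom-mode section of the dump above, each <model> carries a usable flag, and every unusable model is paired with a sibling <blockers> element listing the features this EPYC-Rome host cannot provide. A minimal sketch of recovering that model-to-blockers mapping from a saved copy of the XML with Python's standard xml.etree.ElementTree; the 'domcaps.xml' filename is hypothetical:

    import xml.etree.ElementTree as ET

    # Parse a domainCapabilities document captured from the log above
    # (the 'domcaps.xml' filename is hypothetical).
    root = ET.parse('domcaps.xml').getroot()
    custom = root.find(".//cpu/mode[@name='custom']")

    for model in custom.findall('model'):
        # <blockers> is a sibling keyed by model name, not a child of <model>.
        blockers = custom.find("blockers[@model='%s']" % model.text)
        missing = [] if blockers is None else \
            [f.get('name') for f in blockers.findall('feature')]
        print(model.text, model.get('usable'), missing)

Run against the dump above, this would report e.g. Westmere as usable with no blockers and Skylake-Server as unusable, missing avx512* and related features.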
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.510 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
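Per that DEBUG line, nova issues one capabilities query per machine type. A minimal standalone sketch of the same query with libvirt-python, assuming a local qemu:///system connection and the emulator path shown in the capabilities dump; this is an illustrative equivalent, not nova's actual code path:

    import libvirt

    conn = libvirt.open('qemu:///system')
    for machine in ('q35', 'pc'):
        # Same parameters the host logs here: emulator binary, arch,
        # machine type, and virt type; flags is 0.
        caps = conn.getDomainCapabilities('/usr/libexec/qemu-kvm',
                                          'x86_64', machine, 'kvm', 0)
        print(caps.splitlines()[0])  # '<domainCapabilities>' header line
    conn.close()

Each call returns one XML document like the q35 dump that follows.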
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.516 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 12 17:18:30 np0005481680 nova_compute[264665]: <domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <path>/usr/libexec/qemu-kvm</path>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <domain>kvm</domain>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <arch>x86_64</arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <vcpu max='4096'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <iothreads supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <os supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='firmware'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>efi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <loader supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>rom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pflash</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='readonly'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>yes</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='secure'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>yes</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </loader>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-passthrough' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='hostPassthroughMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='maximum' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='maximumMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-model' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <vendor>AMD</vendor>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='x2apic'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-deadline'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='hypervisor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc_adjust'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='spec-ctrl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='stibp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='arch-capabilities'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='cmp_legacy'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='overflow-recov'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='succor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='amd-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='virt-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lbrv'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-scale'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='vmcb-clean'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='flushbyasid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pause-filter'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pfthreshold'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='svme-addr-chk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rdctl-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='mds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='gds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rfds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='disable' name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='custom' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Dhyana-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-128'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-256'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-512'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v6'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v7'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </cpu>
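[editor's note] The long run of <model>/<blockers> pairs ends here. Each model reported with usable='no' carries a matching <blockers> element naming the host CPU features it would need; usable='yes' models (several flagged deprecated='yes') can be requested directly as custom CPU models. A minimal sketch of how such a dump can be summarized with Python's standard library — the standalone script and the file name domcaps.xml are illustrative assumptions, not nova code:

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()

    # Map model name -> features the host is missing, from <blockers> elements.
    blockers = {b.get("model"): [f.get("name") for f in b.findall("feature")]
                for b in root.iter("blockers")}

    for model in root.iter("model"):
        usable = model.get("usable")
        if usable is None:          # e.g. the host-model <model> element
            continue
        if usable == "yes":
            note = " (deprecated)" if model.get("deprecated") == "yes" else ""
            print(f"usable   {model.text}{note}")
        else:
            missing = ", ".join(blockers.get(model.text, []))
            print(f"blocked  {model.text}: missing {missing}")

On this dump that would report, for example, SandyBridge and Westmere as usable, and every SapphireRapids variant as blocked on the AMX/AVX-512 feature set.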
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <memoryBacking supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='sourceType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>file</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>anonymous</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>memfd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </memoryBacking>
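[editor's note] The memoryBacking element above advertises the memory source types this QEMU build accepts: file, anonymous, and memfd. A quick check in the same style (domcaps.xml again being an assumed local copy of the dump):

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()
    types = [v.text for v in
             root.findall("./memoryBacking/enum[@name='sourceType']/value")]
    print(types)             # ['file', 'anonymous', 'memfd'] per the dump
    # memfd backing is commonly needed for shared-memory backends
    # such as vhost-user interfaces or virtiofs filesystems.
    print("memfd" in types)  # True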
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <disk supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='diskDevice'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>disk</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cdrom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>floppy</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>lun</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>fdc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>sata</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <graphics supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vnc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egl-headless</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>dbus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <video supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='modelType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vga</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cirrus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>none</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>bochs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ramfb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hostdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='mode'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>subsystem</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='startupPolicy'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>mandatory</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>requisite</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>optional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='subsysType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pci</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='capsType'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='pciBackend'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hostdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <rng supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>random</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <filesystem supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='driverType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>path</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>handle</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtiofs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </filesystem>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <tpm supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-tis</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-crb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emulator</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>external</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendVersion'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>2.0</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </tpm>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <redirdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </redirdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <channel supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pty</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>unix</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </channel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <crypto supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>qemu</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </crypto>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <interface supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>passt</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <panic supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>isa</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>hyperv</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </panic>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </devices>
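[editor's note] The <devices> block just closed is the per-device-class support matrix for this emulator/machine-type combination. A small helper (same assumed domcaps.xml) for yes/no support questions:

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()

    def supported(device: str, enum: str, value: str) -> bool:
        """True if <devices>/<device> lists `value` under the named enum."""
        node = root.find(f"./devices/{device}/enum[@name='{enum}']")
        return node is not None and any(v.text == value
                                        for v in node.findall("value"))

    print(supported("disk", "bus", "virtio"))      # True per the dump
    print(supported("graphics", "type", "spice"))  # False: only vnc,
                                                   # egl-headless and dbus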
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <gic supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <vmcoreinfo supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <genid supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backingStoreInput supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backup supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <async-teardown supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <ps2 supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sev supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sgx supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hyperv supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='features'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>relaxed</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vapic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>spinlocks</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vpindex</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>runtime</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>synic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>stimer</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reset</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vendor_id</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>frequencies</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reenlightenment</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tlbflush</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ipi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>avic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emsr_bitmap</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>xmm_input</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hyperv>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <launchSecurity supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </features>
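[editor's note] <features> summarizes host-wide capabilities: on this node GIC, SEV, SGX and launchSecurity are unsupported, while vmcoreinfo, genid and a full set of Hyper-V enlightenments (relaxed through xmm_input, useful for Windows guests) are available. Flattening it with the same standard-library approach:

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()

    # One boolean per feature child, e.g. {'gic': False, 'sev': False, ...}
    flags = {child.tag: child.get("supported") == "yes"
             for child in root.find("features")}
    print(flags)

    # The Hyper-V enlightenments this QEMU would accept.
    hyperv = [v.text for v in
              root.findall("./features/hyperv/enum[@name='features']/value")]
    print(hyperv)   # ['relaxed', 'vapic', ..., 'xmm_input']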
Oct 12 17:18:30 np0005481680 nova_compute[264665]: </domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
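[editor's note] This closes the first domainCapabilities document; nova's _get_domain_capabilities (host.py:1037 above) obtains it from libvirt and logs it verbatim. The same XML can be fetched outside nova with the virsh domcapabilities command, or via the libvirt Python bindings. A minimal sketch, with the emulator path taken from the dump and the machine type matching the machine_type=pc request logged just below:

    import libvirt

    # Connect to the system libvirtd and request domain capabilities for
    # the same emulator/arch/machine/virttype combination nova queries.
    conn = libvirt.open("qemu:///system")
    xml = conn.getDomainCapabilities(
        emulatorbin="/usr/libexec/qemu-kvm",
        arch="x86_64",
        machine="pc",
        virttype="kvm",
    )
    print(xml)
    conn.close()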
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.576 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 12 17:18:30 np0005481680 nova_compute[264665]: <domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <path>/usr/libexec/qemu-kvm</path>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <domain>kvm</domain>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <arch>x86_64</arch>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <vcpu max='240'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <iothreads supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <os supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='firmware'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <loader supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>rom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pflash</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='readonly'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>yes</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='secure'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>no</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </loader>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-passthrough' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='hostPassthroughMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='maximum' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='maximumMigratable'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>on</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>off</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='host-model' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <vendor>AMD</vendor>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='x2apic'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-deadline'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='hypervisor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc_adjust'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='spec-ctrl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='stibp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='arch-capabilities'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='cmp_legacy'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='overflow-recov'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='succor'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='amd-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='virt-ssbd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lbrv'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='tsc-scale'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='vmcb-clean'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='flushbyasid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pause-filter'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pfthreshold'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='svme-addr-chk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rdctl-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='mds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='gds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='require' name='rfds-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <feature policy='disable' name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <mode name='custom' supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Broadwell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cascadelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Cooperlake-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Denverton-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Dhyana-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Genoa-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='auto-ibrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Milan-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amd-psfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='no-nested-data-bp'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='null-sel-clr-base'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='stibp-always-on'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-Rome-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='EPYC-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='GraniteRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-128'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-256'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx10-512'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='prefetchiti'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Haswell-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-noTSX'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v6'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Icelake-Server-v7'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='IvyBridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='KnightsMill-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4fmaps'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-4vnniw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512er'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512pf'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G4-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Opteron_G5-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fma4'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tbm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xop'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SapphireRapids-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='amx-tile'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-bf16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-fp16'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512-vpopcntdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bitalg'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vbmi2'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrc'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fzrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='la57'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='taa-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='tsx-ldtrk'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xfd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='SierraForest-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ifma'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-ne-convert'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx-vnni-int8'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='bus-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cmpccxadd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fbsdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='fsrs'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ibrs-all'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mcdt-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pbrsb-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='psdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='sbdr-ssdp-no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='serialize'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vaes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='vpclmulqdq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Client-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='hle'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='rtm'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Skylake-Server-v5'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512bw'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512cd'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512dq'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512f'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='avx512vl'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='invpcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pcid'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='pku'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='mpx'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v2'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v3'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='core-capability'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='split-lock-detect'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='Snowridge-v4'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='cldemote'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='erms'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='gfni'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdir64b'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='movdiri'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='xsaves'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='athlon-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='core2duo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='coreduo-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='n270-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='ss'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <blockers model='phenom-v1'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnow'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <feature name='3dnowext'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </blockers>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </mode>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <memoryBacking supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <enum name='sourceType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>file</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>anonymous</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <value>memfd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </memoryBacking>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <disk supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='diskDevice'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>disk</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cdrom</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>floppy</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>lun</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ide</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>fdc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>sata</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <graphics supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vnc</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egl-headless</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>dbus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <video supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='modelType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vga</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>cirrus</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>none</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>bochs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ramfb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hostdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='mode'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>subsystem</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='startupPolicy'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>mandatory</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>requisite</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>optional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='subsysType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pci</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>scsi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='capsType'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='pciBackend'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hostdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <rng supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtio-non-transitional</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>random</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>egd</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <filesystem supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='driverType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>path</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>handle</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>virtiofs</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </filesystem>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <tpm supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-tis</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tpm-crb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emulator</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>external</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendVersion'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>2.0</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </tpm>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <redirdev supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='bus'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>usb</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </redirdev>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <channel supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>pty</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>unix</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </channel>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <crypto supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='type'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>qemu</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendModel'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>builtin</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </crypto>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <interface supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='backendType'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>default</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>passt</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <panic supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='model'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>isa</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>hyperv</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </panic>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <gic supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <vmcoreinfo supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <genid supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backingStoreInput supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <backup supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <async-teardown supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <ps2 supported='yes'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sev supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <sgx supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <hyperv supported='yes'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      <enum name='features'>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>relaxed</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vapic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>spinlocks</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vpindex</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>runtime</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>synic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>stimer</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reset</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>vendor_id</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>frequencies</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>reenlightenment</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>tlbflush</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>ipi</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>avic</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>emsr_bitmap</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:        <value>xmm_input</value>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:      </enum>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    </hyperv>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:    <launchSecurity supported='no'/>
Oct 12 17:18:30 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: </domainCapabilities>
Oct 12 17:18:30 np0005481680 nova_compute[264665]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
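
The dump that ends here is libvirt's domainCapabilities document, fetched by Nova's _get_domain_capabilities to learn which CPU models the host can expose and why the rest are blocked. A minimal sketch for pulling the unusable models and their blocking features out of such a dump, assuming it has been saved to a file named domcaps.xml (element and attribute names are taken from the dump itself):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    for model in root.iter('model'):
        if model.get('usable') != 'no':
            continue
        name = model.text
        # each unusable <model> is followed by a <blockers model='...'> element
        blockers = root.find(f".//blockers[@model='{name}']")
        feats = [f.get('name') for f in blockers] if blockers is not None else []
        print(f"{name}: blocked by {', '.join(feats) or '(not listed)'}")

On this host that would report, for example, the Skylake-Server variants blocked by the avx512* family plus pku, and the deprecated athlon/phenom models blocked by 3dnow/3dnowext.
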
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.653 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.654 2 INFO nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Secure Boot support detected#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.656 2 INFO nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
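
The INFO line above records a decision, not just a setting: with live_migration_permit_post_copy enabled and post-copy available on this host, Nova prefers post-copy and leaves auto-converge unused. A minimal sketch of that preference order (function and variable names are illustrative, not Nova's own attributes):

    def choose_migration_aid(permit_post_copy, post_copy_available,
                             permit_auto_converge):
        if permit_post_copy and post_copy_available:
            return 'post-copy'        # the branch this host logs
        if permit_auto_converge:
            return 'auto-converge'
        return 'none'

    print(choose_migration_aid(True, True, True))   # -> post-copy
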
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.669 2 DEBUG nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.695 2 INFO nova.virt.node [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Determined node identity d63acd5d-c9c0-44fc-813b-0eadb368ddab from /var/lib/nova/compute_id#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.708 2 WARNING nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Compute nodes ['d63acd5d-c9c0-44fc-813b-0eadb368ddab'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.730 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 12 17:18:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.757 2 WARNING nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.757 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.757 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.758 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.758 2 DEBUG nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:18:30 np0005481680 nova_compute[264665]: 2025-10-12 21:18:30.759 2 DEBUG oslo_concurrency.processutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
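
Nova sizes Ceph-backed storage by shelling out to exactly the command logged here (it runs again at 21:18:31.915 and returns in about half a second each time). A minimal sketch of the same probe, assuming plain subprocess is acceptable in place of oslo_concurrency.processutils and using the standard `ceph df` JSON keys:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
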
Oct 12 17:18:30 np0005481680 python3.9[264862]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 12 17:18:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 255 B/s wr, 1 op/s
Oct 12 17:18:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:30 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:31 np0005481680 systemd[1]: Started libpod-conmon-47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13.scope.
Oct 12 17:18:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:18:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5f924a914265f3d64b22e8b3bf8a694fee1b6fad065f451420dd53dc17a7af/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5f924a914265f3d64b22e8b3bf8a694fee1b6fad065f451420dd53dc17a7af/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5f924a914265f3d64b22e8b3bf8a694fee1b6fad065f451420dd53dc17a7af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 12 17:18:31 np0005481680 podman[264889]: 2025-10-12 21:18:31.134858893 +0000 UTC m=+0.235675271 container init 47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001)
Oct 12 17:18:31 np0005481680 podman[264889]: 2025-10-12 21:18:31.146006134 +0000 UTC m=+0.246822472 container start 47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 12 17:18:31 np0005481680 python3.9[264862]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Applying nova statedir ownership
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 12 17:18:31 np0005481680 nova_compute_init[264928]: INFO:nova_statedir:Nova statedir ownership complete
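
The nova_compute_init pass above walks /var/lib/nova, re-owns anything not already at the nova uid/gid (42436 here), and skips the path listed in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that walk; the SELinux relabeling to system_u:object_r:container_file_t:s0 that the real script also performs is left out:

    import os

    TARGET_UID = TARGET_GID = 42436                 # nova uid/gid in the log
    SKIP = {'/var/lib/nova/compute_id'}             # NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)

This matches the log: /var/lib/nova and /var/lib/nova/instances are changed from 1000:1000, /var/lib/nova/.ssh is already correct and only relabeled, and compute_id is never touched.
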
Oct 12 17:18:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:18:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530495976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:18:31 np0005481680 systemd[1]: libpod-47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13.scope: Deactivated successfully.
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.251 2 DEBUG oslo_concurrency.processutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:18:31 np0005481680 podman[264948]: 2025-10-12 21:18:31.276581749 +0000 UTC m=+0.028416406 container died 47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 12 17:18:31 np0005481680 systemd[1]: Starting libvirt nodedev daemon...
Oct 12 17:18:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13-userdata-shm.mount: Deactivated successfully.
Oct 12 17:18:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bf5f924a914265f3d64b22e8b3bf8a694fee1b6fad065f451420dd53dc17a7af-merged.mount: Deactivated successfully.
Oct 12 17:18:31 np0005481680 podman[264948]: 2025-10-12 21:18:31.336994639 +0000 UTC m=+0.088829296 container cleanup 47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 12 17:18:31 np0005481680 systemd[1]: libpod-conmon-47c7a148790d98a98089d605552e1d670504bd909b9b89eca8afd88d41716e13.scope: Deactivated successfully.
Oct 12 17:18:31 np0005481680 systemd[1]: Started libvirt nodedev daemon.
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.672 2 WARNING nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.674 2 DEBUG nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.674 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.674 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.686 2 WARNING nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] No compute node record for compute-0.ctlplane.example.com:d63acd5d-c9c0-44fc-813b-0eadb368ddab: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d63acd5d-c9c0-44fc-813b-0eadb368ddab could not be found.#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.702 2 INFO nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: d63acd5d-c9c0-44fc-813b-0eadb368ddab#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.745 2 DEBUG nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.745 2 DEBUG nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.858 2 INFO nova.scheduler.client.report [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [req-0f43fa76-46cd-48f9-b17d-a94de693cde7] Created resource provider record via placement API for resource provider with UUID d63acd5d-c9c0-44fc-813b-0eadb368ddab and name compute-0.ctlplane.example.com.#033[00m
Oct 12 17:18:31 np0005481680 nova_compute[264665]: 2025-10-12 21:18:31.915 2 DEBUG oslo_concurrency.processutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:18:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:32] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:18:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:32] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Oct 12 17:18:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
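
The anonymous "HEAD / HTTP/1.0" 200 entries arriving every couple of seconds from 192.168.122.100 and .102 match the pattern a load balancer's health check leaves in a radosgw access log; that reading is an inference from the cadence, not something the log states. One such probe, with the endpoint URL assumed:

    import urllib.request

    req = urllib.request.Request('http://127.0.0.1:8080/', method='HEAD')
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.status)        # 200 means the gateway is answering
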
Oct 12 17:18:32 np0005481680 systemd[1]: session-55.scope: Deactivated successfully.
Oct 12 17:18:32 np0005481680 systemd[1]: session-55.scope: Consumed 3min 21.068s CPU time.
Oct 12 17:18:32 np0005481680 systemd-logind[783]: Session 55 logged out. Waiting for processes to exit.
Oct 12 17:18:32 np0005481680 systemd-logind[783]: Removed session 55.
Oct 12 17:18:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:32 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:32 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f40028a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:18:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313776339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.423 2 DEBUG oslo_concurrency.processutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.432 2 DEBUG nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 12 17:18:32 np0005481680 nova_compute[264665]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.433 2 INFO nova.virt.libvirt.host [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] kernel doesn't support AMD SEV#033[00m
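
The two lines above show the probe behind _kernel_supports_amd_sev: Nova reads the kvm_amd module parameter, and this host returned 'N' (the bracketed content spans two log lines because the file ends in a newline). A minimal sketch, with the exact set of accepted truthy values an assumption:

    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        try:
            with open(path) as f:
                return f.read().strip() in ('1', 'Y')
        except OSError:
            return False

    print(kernel_supports_amd_sev())   # -> False here: the file holds 'N'
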
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.434 2 DEBUG nova.compute.provider_tree [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.435 2 DEBUG nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.478 2 DEBUG nova.scheduler.client.report [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Updated inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.478 2 DEBUG nova.compute.provider_tree [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Updating resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.479 2 DEBUG nova.compute.provider_tree [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
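
The inventory repeated in the three updates above determines the capacity Placement actually schedules against: (total - reserved) * allocation_ratio per resource class. Worked through with this host's figures:

    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1
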
Oct 12 17:18:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.570 2 DEBUG nova.compute.provider_tree [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Updating resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 12 17:18:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.592 2 DEBUG nova.compute.resource_tracker [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.592 2 DEBUG oslo_concurrency.lockutils [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.593 2 DEBUG nova.service [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.689 2 DEBUG nova.service [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 12 17:18:32 np0005481680 nova_compute[264665]: 2025-10-12 21:18:32.689 2 DEBUG nova.servicegroup.drivers.db [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 12 17:18:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:32 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:18:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
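
The audit line above records the mgr dispatching {"prefix": "osd blocklist ls"} to the monitor; the earlier "df" dispatches from client.openstack follow the same path. A minimal sketch of issuing such a monitor command over librados, assuming the python3-rados binding and a readable ceph.conf plus keyring for the named client:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'}), b'')
        print(ret, out.decode() or errs)
    finally:
        cluster.shutdown()
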
Oct 12 17:18:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:34 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:34 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:34 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f40028a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:36 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e40036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:36 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:36 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:37.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:18:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:37.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:18:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:38 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:38 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:38 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:39 np0005481680 podman[265071]: 2025-10-12 21:18:39.885237973 +0000 UTC m=+0.094683472 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:18:39 np0005481680 podman[265072]: 2025-10-12 21:18:39.977962697 +0000 UTC m=+0.187663683 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:18:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:40 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:40 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:18:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:40 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:42] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:18:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:42] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:18:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:42 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:42 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e8003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:42 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:44 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:44 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:44 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:46 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:46 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:46.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:46 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f620c0013a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:47.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:18:47 np0005481680 podman[265125]: 2025-10-12 21:18:47.137289177 +0000 UTC m=+0.094059036 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 12 17:18:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:48.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:48 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:18:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:18:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:48 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:18:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:48 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:50 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:50 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:18:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:50 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62100011c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:18:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:18:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:18:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:52 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:53 np0005481680 ceph-osd[81892]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000067s
Oct 12 17:18:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:54 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:54 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:54 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:18:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:18:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:18:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:56 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:56 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:56.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:56 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:57.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:18:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:57.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:18:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:18:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:18:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:18:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:18:58.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:18:58 np0005481680 podman[265160]: 2025-10-12 21:18:58.120633207 +0000 UTC m=+0.078183558 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:18:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:18:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:18:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:18:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:18:58.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:18:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:18:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:18:58 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:00.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:00 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:00 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:00.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:19:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:00 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003a00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:19:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:19:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6210001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:02 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:19:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:19:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:04 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003a20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:04 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62100095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:04.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:04 np0005481680 nova_compute[264665]: 2025-10-12 21:19:04.691 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:19:04 np0005481680 nova_compute[264665]: 2025-10-12 21:19:04.726 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:19:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:04 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003a40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:06.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:06 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f62100095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:07.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:19:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:08 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:08 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:08 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003a60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:10 np0005481680 podman[265216]: 2025-10-12 21:19:10.123396982 +0000 UTC m=+0.086170980 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:19:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:10 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f621000a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:10 np0005481680 podman[265217]: 2025-10-12 21:19:10.220941615 +0000 UTC m=+0.175973037 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 12 17:19:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:10 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:10.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:19:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:10 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:12] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:19:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:12] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Oct 12 17:19:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003a80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f621000a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:12.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:12 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61e4003aa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:14.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:14 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f621000a2b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:16.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:16 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f61f4004530 fd 42 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.272678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956272730, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2102, "num_deletes": 251, "total_data_size": 4266894, "memory_usage": 4351208, "flush_reason": "Manual Compaction"}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956301248, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4163989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19912, "largest_seqno": 22012, "table_properties": {"data_size": 4154513, "index_size": 6034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18874, "raw_average_key_size": 20, "raw_value_size": 4135857, "raw_average_value_size": 4390, "num_data_blocks": 265, "num_entries": 942, "num_filter_entries": 942, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303736, "oldest_key_time": 1760303736, "file_creation_time": 1760303956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 28647 microseconds, and 13442 cpu microseconds.
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.301321) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4163989 bytes OK
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.301350) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.303470) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.303494) EVENT_LOG_v1 {"time_micros": 1760303956303486, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.303518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4258481, prev total WAL file size 4258481, number of live WAL files 2.
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
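The rocksdb EVENT_LOG_v1 entries in this flush/compaction sequence carry a machine-readable JSON payload after a fixed marker; a small helper to pull it out of a journal line (a sketch, not a Ceph or RocksDB API):

    import json

    def event_payload(journal_line):
        """Return the EVENT_LOG_v1 JSON payload of a rocksdb line, or None."""
        marker = "EVENT_LOG_v1 "
        i = journal_line.find(marker)
        return json.loads(journal_line[i + len(marker):]) if i != -1 else None

    line = ('Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 '
            '{"time_micros": 1760303956272730, "job": 21, "event": "flush_started", '
            '"num_memtables": 1, "num_entries": 2102}')
    ev = event_payload(line)
    print(ev["event"], ev["job"])  # flush_started 21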
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.305575) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4066KB)], [44(12MB)]
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956305630, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16867726, "oldest_snapshot_seqno": -1}
Oct 12 17:19:16 np0005481680 kernel: ganesha.nfsd[262791]: segfault at 50 ip 00007f62be57a32e sp 00007f628cff8210 error 4 in libntirpc.so.5.8[7f62be55f000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 12 17:19:16 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
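The segfault line gives everything needed to locate the faulting instruction inside libntirpc: instruction pointer, module base, and mapped size. The offset computed below can be fed to addr2line or objdump against the matching libntirpc.so.5.8 build:

    # Values copied from the kernel segfault line above.
    ip   = 0x00007F62BE57A32E   # faulting instruction pointer
    base = 0x7F62BE55F000       # module base from "libntirpc.so.5.8[7f62be55f000+2c000]"
    size = 0x2C000              # mapped size of the module
    off = ip - base
    assert 0 <= off < size      # the fault lies inside the mapping
    print(hex(off))             # 0x1b32e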
Oct 12 17:19:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[260712]: 12/10/2025 21:19:16 : epoch 68ec1b00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6200001f20 fd 42 proxy ignored for local
Oct 12 17:19:16 np0005481680 systemd[1]: Started Process Core Dump (PID 265269/UID 0).
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5380 keys, 14679656 bytes, temperature: kUnknown
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956420047, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14679656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14641467, "index_size": 23615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 135717, "raw_average_key_size": 25, "raw_value_size": 14541897, "raw_average_value_size": 2702, "num_data_blocks": 977, "num_entries": 5380, "num_filter_entries": 5380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760303956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.420677) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14679656 bytes
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.422608) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.9 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.1 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5900, records dropped: 520 output_compression: NoCompression
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.422639) EVENT_LOG_v1 {"time_micros": 1760303956422624, "job": 22, "event": "compaction_finished", "compaction_time_micros": 114804, "compaction_time_cpu_micros": 50272, "output_level": 6, "num_output_files": 1, "total_output_size": 14679656, "num_input_records": 5900, "num_output_records": 5380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
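The derived figures in the compaction summary above can be reproduced directly from the compaction_finished event fields (bytes per microsecond equals MB/s in RocksDB's accounting):

    in_bytes  = 16_867_726   # input_data_size for job 22
    out_bytes = 14_679_656   # total_output_size
    micros    = 114_804      # compaction_time_micros
    print(round(in_bytes / micros, 1))    # 146.9 -> "MB/sec: 146.9 rd"
    print(round(out_bytes / micros, 1))   # 127.9 -> "127.9 wr"
    # write-amplify 3.5 = 14.0 MB out / 4.0 MB of L0 input, as summarized above.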
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956424253, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760303956429326, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.305456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.429495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.429504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.429508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.429512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:19:16.429516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:19:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:16.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:17.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
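Both ceph-dashboard webhook receivers (compute-1 and compute-2 on port 8443) hit their context deadline. A quick reachability probe against the same endpoint, as a sketch: the 5-second timeout is an assumption, and the empty JSON body only tests connectivity, not a valid Alertmanager payload:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"[]",
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:  # URLError and socket timeouts both derive from OSError
        print("unreachable:", exc)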
Oct 12 17:19:17 np0005481680 podman[265296]: 2025-10-12 21:19:17.397264724 +0000 UTC m=+0.082219259 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:19:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211917 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:19:17 np0005481680 systemd-coredump[265270]: Process 260716 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 44:
                                                       #0  0x00007f62be57a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:19:17 np0005481680 systemd[1]: systemd-coredump@11-265269-0.service: Deactivated successfully.
Oct 12 17:19:17 np0005481680 systemd[1]: systemd-coredump@11-265269-0.service: Consumed 1.248s CPU time.
Oct 12 17:19:17 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:19:17 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:19:17 np0005481680 podman[265363]: 2025-10-12 21:19:17.815389644 +0000 UTC m=+0.043237989 container died 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:19:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-125c54a8fb6f4cc8f475f6c0c9679418a0960e4bfb0c9a75f0e8c0404c387ec4-merged.mount: Deactivated successfully.
Oct 12 17:19:17 np0005481680 podman[265363]: 2025-10-12 21:19:17.87684633 +0000 UTC m=+0.104694665 container remove 5c2771c7ae909770ad771873b5b80f7bd8e86689234ad2e431f48716dee6dfb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:19:17 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:18.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:19:18 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:19:18 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.958s CPU time.
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:19:18
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes']
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:19:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:19:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:19:18.353 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:19:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:19:18.354 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:19:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:19:18.355 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
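The pg_autoscaler targets above all satisfy pg_target = usage_ratio x bias x 300; the factor 300 would be consistent with mon_target_pg_per_osd=100 across the 3 OSDs backing this 60 GiB cluster (an inference from the numbers, not stated in the log). Each target is then quantized to the pool's current pg_num (1, 16, or 32 here):

    # Pool 'cephfs.cephfs.meta' from the autoscaler pass above:
    usage_ratio = 5.087256625643029e-07
    bias        = 4.0
    print(usage_ratio * bias * 300)   # 0.0006104707950771635, exactly as logged
    # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 -> 0.0021557249951162337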
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:19:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:18.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:19:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.812359457 +0000 UTC m=+0.068919436 container create b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:19:18 np0005481680 systemd[1]: Started libpod-conmon-b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3.scope.
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.783633594 +0000 UTC m=+0.040193633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.917618585 +0000 UTC m=+0.174178614 container init b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.928342134 +0000 UTC m=+0.184902123 container start b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.932473438 +0000 UTC m=+0.189033427 container attach b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:19:18 np0005481680 quirky_dirac[265530]: 167 167
Oct 12 17:19:18 np0005481680 systemd[1]: libpod-b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3.scope: Deactivated successfully.
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.937423333 +0000 UTC m=+0.193983322 container died b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 17:19:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cd10f49de80db8cff1c8d748015f1007a59ccbccc543bd5fafad360d4629217e-merged.mount: Deactivated successfully.
Oct 12 17:19:18 np0005481680 podman[265514]: 2025-10-12 21:19:18.990769555 +0000 UTC m=+0.247329544 container remove b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_dirac, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:19:19 np0005481680 systemd[1]: libpod-conmon-b6bba2bf6e01cdeb0fbe772af5fbe29c2f7fe1491aef5998affadc4107ff3cf3.scope: Deactivated successfully.
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.245883423 +0000 UTC m=+0.074014643 container create a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:19:19 np0005481680 systemd[1]: Started libpod-conmon-a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d.scope.
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.218174896 +0000 UTC m=+0.046306176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.358821905 +0000 UTC m=+0.186953175 container init a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.376054319 +0000 UTC m=+0.204185539 container start a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.380285655 +0000 UTC m=+0.208416935 container attach a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:19:19 np0005481680 nostalgic_stonebraker[265574]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:19:19 np0005481680 nostalgic_stonebraker[265574]: --> All data devices are unavailable
Oct 12 17:19:19 np0005481680 systemd[1]: libpod-a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d.scope: Deactivated successfully.
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.790515646 +0000 UTC m=+0.618646876 container died a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:19:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-893897e15f096c3d6d503ad50db47a7af81f59dc60158bf7f78d6fb9abb087c9-merged.mount: Deactivated successfully.
Oct 12 17:19:19 np0005481680 podman[265557]: 2025-10-12 21:19:19.849395997 +0000 UTC m=+0.677527217 container remove a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:19:19 np0005481680 systemd[1]: libpod-conmon-a54905a63ebd50a5c4b8e8bc36a02e4b18ece1f0955540217bec2f0ac44a4a3d.scope: Deactivated successfully.
Oct 12 17:19:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:20.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:20 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.631441353 +0000 UTC m=+0.064972196 container create 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:19:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:20 np0005481680 systemd[1]: Started libpod-conmon-2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41.scope.
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.605856939 +0000 UTC m=+0.039387832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.733938972 +0000 UTC m=+0.167469865 container init 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.744338613 +0000 UTC m=+0.177869456 container start 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.748790076 +0000 UTC m=+0.182320949 container attach 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:19:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:20 np0005481680 sweet_albattani[265735]: 167 167
Oct 12 17:19:20 np0005481680 systemd[1]: libpod-2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41.scope: Deactivated successfully.
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.752968631 +0000 UTC m=+0.186499464 container died 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:19:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ae0a47badf3fa48e767f92e437fa54682e7aa4bdb5b2bb979ee237db1a7fcc04-merged.mount: Deactivated successfully.
Oct 12 17:19:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 12 17:19:20 np0005481680 podman[265719]: 2025-10-12 21:19:20.808172419 +0000 UTC m=+0.241703252 container remove 2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:19:20 np0005481680 systemd[1]: libpod-conmon-2b150a9f523268026292594d7e1b6abeb51aea999e71c88f851a74689de47b41.scope: Deactivated successfully.
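
The span above is one complete lifecycle of a short-lived cephadm helper container: create, init, start, attach, died, remove, with the conmon scope deactivated at the end. A minimal sketch (the filename and regex are illustrative, assuming the journal text has been saved to a file) that pairs these events by container ID:

    import re
    from collections import defaultdict

    # Illustrative pattern for the podman event lines above:
    # "<date> <time> +0000 UTC m=+<uptime> container <event> <64-hex id>"
    EVENT_RE = re.compile(
        r'podman\[\d+\]: (?P<ts>\S+ \S+) \+\d{4} UTC m=\+\S+ '
        r'container (?P<event>create|init|start|attach|died|remove) '
        r'(?P<cid>[0-9a-f]{64})'
    )

    def lifecycles(journal_text):
        """Group podman lifecycle events by container ID, in log order."""
        events = defaultdict(list)
        for line in journal_text.splitlines():
            m = EVENT_RE.search(line)
            if m:
                events[m.group('cid')].append((m.group('ts'), m.group('event')))
        return events

    # events = lifecycles(open('journal.txt').read())   # hypothetical capture
    # A probe container like the one above shows the full create->remove
    # sequence within a fraction of a second.
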
Oct 12 17:19:21 np0005481680 podman[265758]: 2025-10-12 21:19:21.061840031 +0000 UTC m=+0.068676469 container create 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:19:21 np0005481680 systemd[1]: Started libpod-conmon-07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94.scope.
Oct 12 17:19:21 np0005481680 podman[265758]: 2025-10-12 21:19:21.033214691 +0000 UTC m=+0.040051179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb817b4cfefdaba9d36599da3f67b1c503a9a56d3af3f80feebaee9c13f6be6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb817b4cfefdaba9d36599da3f67b1c503a9a56d3af3f80feebaee9c13f6be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb817b4cfefdaba9d36599da3f67b1c503a9a56d3af3f80feebaee9c13f6be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb817b4cfefdaba9d36599da3f67b1c503a9a56d3af3f80feebaee9c13f6be6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
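
The kernel prints one of these warnings per bind-mounted path whenever an XFS filesystem without the bigtime feature is remounted into a container; 0x7fffffff is simply the largest 32-bit signed Unix timestamp. A one-liner confirming the date it decodes to:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the last second representable in a signed
    # 32-bit time_t; XFS without "bigtime" cannot store inode timestamps
    # beyond it.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
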
Oct 12 17:19:21 np0005481680 podman[265758]: 2025-10-12 21:19:21.16312925 +0000 UTC m=+0.169965688 container init 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:19:21 np0005481680 podman[265758]: 2025-10-12 21:19:21.174361453 +0000 UTC m=+0.181197881 container start 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:19:21 np0005481680 podman[265758]: 2025-10-12 21:19:21.178316111 +0000 UTC m=+0.185152539 container attach 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]: {
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:    "0": [
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:        {
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "devices": [
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "/dev/loop3"
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            ],
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "lv_name": "ceph_lv0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "lv_size": "21470642176",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "name": "ceph_lv0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "tags": {
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.cluster_name": "ceph",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.crush_device_class": "",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.encrypted": "0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.osd_id": "0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.type": "block",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.vdo": "0",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:                "ceph.with_tpm": "0"
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            },
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "type": "block",
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:            "vg_name": "ceph_vg0"
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:        }
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]:    ]
Oct 12 17:19:21 np0005481680 epic_chandrasekhar[265774]: }
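
The JSON emitted by this container has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it. A minimal sketch, assuming that output was captured to a file, that extracts the device mapping:

    import json

    with open('lvm_list.json') as f:   # hypothetical capture of the JSON above
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            if lv.get('type') == 'block':
                size_gib = int(lv['lv_size']) / 2**30
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"({size_gib:.1f} GiB on {','.join(lv['devices'])})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB on /dev/loop3)
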
Oct 12 17:19:21 np0005481680 systemd[1]: libpod-07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94.scope: Deactivated successfully.
Oct 12 17:19:21 np0005481680 podman[265784]: 2025-10-12 21:19:21.577109215 +0000 UTC m=+0.040521441 container died 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:19:21 np0005481680 systemd[1]: var-lib-containers-storage-overlay-efb817b4cfefdaba9d36599da3f67b1c503a9a56d3af3f80feebaee9c13f6be6-merged.mount: Deactivated successfully.
Oct 12 17:19:21 np0005481680 podman[265784]: 2025-10-12 21:19:21.641521805 +0000 UTC m=+0.104934021 container remove 07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_chandrasekhar, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:19:21 np0005481680 systemd[1]: libpod-conmon-07fcc738b9e4968f4c1e3ec1cfa71f8774a533604693a6417b9a7ec0155c0e94.scope: Deactivated successfully.
Oct 12 17:19:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:19:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:19:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:22.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
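
The anonymous HEAD / probes from 192.168.122.100 and .102 recur every two seconds throughout this section, the signature of load-balancer health checks rather than client traffic. An illustrative parser for the beast access-log format, with the field layout inferred from the lines above:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d{3}) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:19:22.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000024s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.100 200 0.001000024
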
Oct 12 17:19:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211922 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.419587391 +0000 UTC m=+0.067549100 container create 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:19:22 np0005481680 systemd[1]: Started libpod-conmon-37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6.scope.
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.392317734 +0000 UTC m=+0.040279493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.527207459 +0000 UTC m=+0.175169218 container init 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.535793185 +0000 UTC m=+0.183754904 container start 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.540423441 +0000 UTC m=+0.188385210 container attach 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:19:22 np0005481680 zen_borg[265908]: 167 167
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.542827251 +0000 UTC m=+0.190788960 container died 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:19:22 np0005481680 systemd[1]: libpod-37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6.scope: Deactivated successfully.
Oct 12 17:19:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-57460af5f17a38480a5ff22e391220ff75c40f4405943de904b6f3ebeeb038dc-merged.mount: Deactivated successfully.
Oct 12 17:19:22 np0005481680 podman[265892]: 2025-10-12 21:19:22.596350188 +0000 UTC m=+0.244311867 container remove 37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:19:22 np0005481680 systemd[1]: libpod-conmon-37cc5a0a4f8fdae45ce55c17465b344d1654682b3448040e04ec38aa0bd6aab6.scope: Deactivated successfully.
Oct 12 17:19:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:22.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 12 17:19:22 np0005481680 podman[265932]: 2025-10-12 21:19:22.825916894 +0000 UTC m=+0.069787667 container create 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:19:22 np0005481680 systemd[1]: Started libpod-conmon-7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914.scope.
Oct 12 17:19:22 np0005481680 podman[265932]: 2025-10-12 21:19:22.794550065 +0000 UTC m=+0.038420888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:19:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d97f98dcf2dcd187d1eeaf813aaceb1b79257a74f8645b331594418701094aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d97f98dcf2dcd187d1eeaf813aaceb1b79257a74f8645b331594418701094aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d97f98dcf2dcd187d1eeaf813aaceb1b79257a74f8645b331594418701094aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d97f98dcf2dcd187d1eeaf813aaceb1b79257a74f8645b331594418701094aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:23 np0005481680 podman[265932]: 2025-10-12 21:19:23.039706933 +0000 UTC m=+0.283577756 container init 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:19:23 np0005481680 podman[265932]: 2025-10-12 21:19:23.051758795 +0000 UTC m=+0.295629578 container start 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 17:19:23 np0005481680 podman[265932]: 2025-10-12 21:19:23.072680342 +0000 UTC m=+0.316551175 container attach 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 17:19:23 np0005481680 lvm[266025]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:19:23 np0005481680 lvm[266025]: VG ceph_vg0 finished
Oct 12 17:19:23 np0005481680 wonderful_pare[265949]: {}
Oct 12 17:19:23 np0005481680 systemd[1]: libpod-7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914.scope: Deactivated successfully.
Oct 12 17:19:23 np0005481680 podman[265932]: 2025-10-12 21:19:23.975014904 +0000 UTC m=+1.218885677 container died 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:19:23 np0005481680 systemd[1]: libpod-7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914.scope: Consumed 1.532s CPU time.
Oct 12 17:19:24 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4d97f98dcf2dcd187d1eeaf813aaceb1b79257a74f8645b331594418701094aa-merged.mount: Deactivated successfully.
Oct 12 17:19:24 np0005481680 podman[265932]: 2025-10-12 21:19:24.034033889 +0000 UTC m=+1.277904662 container remove 7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:19:24 np0005481680 systemd[1]: libpod-conmon-7b93b88c7071502e46201f8932ce08f2ecc3bd542b8ecec5d639a5165fdf2914.scope: Deactivated successfully.
Oct 12 17:19:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:19:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:24.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:19:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
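
The two handle_command/audit pairs above show the cephadm mgr module persisting its freshly gathered host and device inventory under cluster config-keys. A sketch of the same kind of call through the python-rados binding (the key is the one seen above; the value is a placeholder, and this is not cephadm's own code):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({
            'prefix': 'config-key set',
            'key': 'mgr/cephadm/host.compute-0.devices.0',  # key seen above
            'val': json.dumps({'devices': []}),             # placeholder value
        })
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)
    finally:
        cluster.shutdown()
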
Oct 12 17:19:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:24.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:19:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:25 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:19:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:26.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:19:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:27.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
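
The alertmanager error means both dashboard webhook receivers timed out; only the active mgr serves the dashboard, so posts to the standby hosts' :8443 hang until the context deadline. A throwaway stand-in receiver (plain HTTP, hypothetical, not the Ceph dashboard) can confirm basic reachability of that port from the alertmanager host:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            print('received', payload.get('status'), 'notification')
            self.send_response(200)
            self.end_headers()

    # Run on the target host, then point a test POST at it.
    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()
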
Oct 12 17:19:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:28.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 12.
Oct 12 17:19:28 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:19:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.958s CPU time.
Oct 12 17:19:28 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:19:28 np0005481680 podman[266071]: 2025-10-12 21:19:28.388290908 +0000 UTC m=+0.095119119 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
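
The health_status=healthy event comes from podman running the healthcheck configured above ('test': '/openstack/healthcheck') inside the container. The current state can be read back from podman inspect; the exact field name varies across podman versions, hence the fallback:

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', 'ovn_metadata_agent'],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]['State']
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status'), 'failing streak:', health.get('FailingStreak'))
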
Oct 12 17:19:28 np0005481680 podman[266140]: 2025-10-12 21:19:28.609927635 +0000 UTC m=+0.057807416 container create acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:19:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:28 np0005481680 podman[266140]: 2025-10-12 21:19:28.58269559 +0000 UTC m=+0.030575431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:19:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54261b7f8d9d6a78a7fcee7acb17f09729cd25c87b4a797e67bb97ba58e9edf/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54261b7f8d9d6a78a7fcee7acb17f09729cd25c87b4a797e67bb97ba58e9edf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54261b7f8d9d6a78a7fcee7acb17f09729cd25c87b4a797e67bb97ba58e9edf/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:28 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54261b7f8d9d6a78a7fcee7acb17f09729cd25c87b4a797e67bb97ba58e9edf/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:19:28 np0005481680 podman[266140]: 2025-10-12 21:19:28.702276923 +0000 UTC m=+0.150156754 container init acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:19:28 np0005481680 podman[266140]: 2025-10-12 21:19:28.71743715 +0000 UTC m=+0.165316931 container start acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:19:28 np0005481680 bash[266140]: acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62
Oct 12 17:19:28 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:19:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:19:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
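
Ganesha enters a 90-second grace window on startup, during which clients may only reclaim state they held before the restart. From the logged start time, the window here runs to 21:20:58 UTC:

    from datetime import datetime, timedelta, timezone

    start = datetime(2025, 10, 12, 21, 19, 28, tzinfo=timezone.utc)
    print('grace ends at', (start + timedelta(seconds=90)).time(), 'UTC')
    # grace ends at 21:20:58 UTC
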
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.665 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.666 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.667 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.667 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.703 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.703 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.704 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.705 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.705 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.705 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.706 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.706 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.707 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.748 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.749 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.749 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.750 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:19:29 np0005481680 nova_compute[264665]: 2025-10-12 21:19:29.751 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:19:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:30.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:19:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408796238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.272 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
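
The resource audit shells out to the ceph CLI exactly as logged; nova's RBD image backend derives free disk for the hypervisor from the pool stats in this JSON. A sketch mirroring that call, with the same flags as the logged command:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)['stats']
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB available "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")
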
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.510 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.513 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4930MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.513 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.514 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.653 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.653 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:19:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:30.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:30 np0005481680 nova_compute[264665]: 2025-10-12 21:19:30.702 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:19:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:19:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050130575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:19:31 np0005481680 nova_compute[264665]: 2025-10-12 21:19:31.183 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:19:31 np0005481680 nova_compute[264665]: 2025-10-12 21:19:31.192 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:19:31 np0005481680 nova_compute[264665]: 2025-10-12 21:19:31.225 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:19:31 np0005481680 nova_compute[264665]: 2025-10-12 21:19:31.228 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:19:31 np0005481680 nova_compute[264665]: 2025-10-12 21:19:31.229 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
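
Placement computes schedulable capacity from that inventory as (total - reserved) x allocation_ratio, so the figures above translate to 32 vCPUs, 7168 MB of RAM, and 53.1 GB of disk:

    # Worked example using the inventory data logged above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7168.0 / VCPU 32.0 / DISK_GB 53.1
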
Oct 12 17:19:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211931 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:19:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:19:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:19:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:32.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:32.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:19:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:19:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:34.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:34.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:19:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 12 17:19:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 12 17:19:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:19:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:19:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:19:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:19:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:37.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:19:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:38.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:38 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:38 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:38 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:39 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=cleanup t=2025-10-12T21:19:39.144235643Z level=info msg="Completed cleanup jobs" duration=15.130406ms
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugins.update.checker t=2025-10-12T21:19:39.280959732Z level=info msg="Update check succeeded" duration=51.862264ms
Oct 12 17:19:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana.update.checker t=2025-10-12T21:19:39.312489617Z level=info msg="Update check succeeded" duration=56.814871ms
Oct 12 17:19:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:40.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:19:41 np0005481680 podman[266281]: 2025-10-12 21:19:41.155508313 +0000 UTC m=+0.106135899 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 12 17:19:41 np0005481680 podman[266282]: 2025-10-12 21:19:41.200150163 +0000 UTC m=+0.150779119 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 12 17:19:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:41 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:19:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:41 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:19:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:41 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:19:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:41 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:42] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:19:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:42] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:19:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001f:nfs.cephfs.2: -2
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:19:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:19:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 255 B/s wr, 1 op/s
Oct 12 17:19:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:44 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:44 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:44.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 12 17:19:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:44 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:19:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:19:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:46 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.24614 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.24620 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.24614 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 12 17:19:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/211946 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:19:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:46 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:46.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:46 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:47.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:19:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:47.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:19:48 np0005481680 podman[266348]: 2025-10-12 21:19:48.124121018 +0000 UTC m=+0.084627941 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3)
Oct 12 17:19:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:48 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:19:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:19:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:48 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:19:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:48.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:48 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:50 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:50 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:50 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:52] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:19:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:19:52] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:19:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:52 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:52 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:52.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Oct 12 17:19:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:52 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:19:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:19:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:54 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:54 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:19:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:54 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:19:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:56 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:56 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:56.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:57 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:19:57.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:19:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:19:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:19:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:19:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:58 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:58 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:19:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:19:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:19:58.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:19:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:19:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:19:59 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:19:59 np0005481680 podman[266380]: 2025-10-12 21:19:59.118377908 +0000 UTC m=+0.078898266 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:20:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 17:20:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:00 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:00 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:00.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:00 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 17:20:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:01 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:02] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:20:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:02] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Oct 12 17:20:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:02 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:02 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:03 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:20:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:20:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:04 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:04 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:20:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:05 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.24626 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:20:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 12 17:20:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/642377862' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:20:05 np0005481680 ceph-mgr[73901]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 12 17:20:06 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 12 17:20:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:06.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:07 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:07.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
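
Both of alertmanager's webhook targets (the Ceph dashboard on compute-1 and compute-2, port 8443) are unreachable here, so every dispatch cycle ends in "notify retry canceled ... context deadline exceeded". For illustration, a minimal stdlib receiver that accepts the POST shape an Alertmanager webhook sends; this is a hypothetical stand-in, not the dashboard's /api/prometheus_receiver implementation:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical stand-in for the unreachable receiver in the log above.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body or b"{}")
            # Alertmanager webhook payloads carry a JSON "alerts" list.
            print("received", len(payload.get("alerts", [])), "alert(s)")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
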
Oct 12 17:20:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:08 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:08 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:09 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:10 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:10 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:11 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:12] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:20:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:12] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:20:12 np0005481680 podman[266438]: 2025-10-12 21:20:12.140879243 +0000 UTC m=+0.096218066 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:20:12 np0005481680 podman[266439]: 2025-10-12 21:20:12.18462098 +0000 UTC m=+0.135672124 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:20:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:12 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:12 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:12.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:13 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c00a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:14.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:14 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:14 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:20:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:15 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:16 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:16 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380001bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:17 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:17.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:20:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:20:18
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', 'vms', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', '.mgr']
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
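
The balancer pass above found nothing to do: in upmap mode it may keep at most max misplaced (0.05, i.e. 5%) of PGs in flight, and it prepared 0 of its 10 candidate changes because the 337 PGs are already evenly mapped. The headroom that threshold allows works out as:

    import math

    pgs = 337             # from the pgmap lines above
    max_misplaced = 0.05  # balancer threshold from the log

    # Upper bound on PGs the balancer may put into a misplaced state at once.
    print(math.floor(pgs * max_misplaced))  # -> 16
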
Oct 12 17:20:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:18 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:20:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:20:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:18.355 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:20:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:18.356 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:20:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:18.356 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:18 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
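
The pg_autoscaler lines above compute a fractional PG target per pool from its share of the 64411926528-byte root, then quantize it; every pool here is reported at its current pg_num because the tiny targets, after quantization and the autoscaler's per-pool minima and change threshold, give no reason to resize. A sketch of just the quantization step visible in the "quantized to N" messages (an illustration, not Ceph's exact algorithm):

    def next_pow2(x: float) -> int:
        """Smallest power of two >= x, floored at 1."""
        p = 1
        while p < x:
            p *= 2
        return p

    print(next_pow2(0.0021557249951162337))  # .mgr target from the log -> 1
    print(next_pow2(20))                     # e.g. a target of 20      -> 32
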
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:20:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:19 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380001d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:19 np0005481680 podman[266493]: 2025-10-12 21:20:19.115174467 +0000 UTC m=+0.074318628 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:20:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:20 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:20 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:21 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:22] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:20:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:22] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:20:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:22 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380002650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:22 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:22.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:23 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:24.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:24 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:24 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380002650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:20:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:25 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 17:20:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:26.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:26 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:26 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:26.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:20:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:20:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:27 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380002650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:27.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:20:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:27.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:20:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:27.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:28.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:28 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
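
The config-key set commands in this burst show cephadm persisting its state (host inventory under mgr/cephadm/host.*, the OSD-removal queue, and the nfs.cephfs service spec) in the monitor's key-value store. Such a key can be read back for inspection; a minimal sketch assuming the ceph CLI and an admin keyring:

    import subprocess

    # Read back one of the keys cephadm wrote above.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/osd_remove_queue"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
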
Oct 12 17:20:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:20:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:29 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.626713744 +0000 UTC m=+0.193885930 container create c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:20:29 np0005481680 systemd[1]: Started libpod-conmon-c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a.scope.
Oct 12 17:20:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.686646554 +0000 UTC m=+0.253818760 container init c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.698296801 +0000 UTC m=+0.265468987 container start c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.701103122 +0000 UTC m=+0.268275558 container attach c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 17:20:29 np0005481680 hardcore_dijkstra[266744]: 167 167
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.609229978 +0000 UTC m=+0.176402184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.705903986 +0000 UTC m=+0.273076212 container died c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:20:29 np0005481680 systemd[1]: libpod-c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a.scope: Deactivated successfully.
Oct 12 17:20:29 np0005481680 podman[266741]: 2025-10-12 21:20:29.715257534 +0000 UTC m=+0.058451923 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 12 17:20:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f49e760d03825c6259261da4879f401e2c86b359a6e02b1ede0b23d2d77b8238-merged.mount: Deactivated successfully.
Oct 12 17:20:29 np0005481680 podman[266725]: 2025-10-12 21:20:29.76366872 +0000 UTC m=+0.330840936 container remove c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:20:29 np0005481680 systemd[1]: libpod-conmon-c6d055f307fb73e5528370b00dad93ea8cfaa1e8309e9128a12060f1a2fe119a.scope: Deactivated successfully.
Oct 12 17:20:29 np0005481680 podman[266786]: 2025-10-12 21:20:29.996406191 +0000 UTC m=+0.060957917 container create 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:20:30 np0005481680 systemd[1]: Started libpod-conmon-7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf.scope.
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:29.969454163 +0000 UTC m=+0.034005939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:30.107303141 +0000 UTC m=+0.171854857 container init 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:30.119330049 +0000 UTC m=+0.183881745 container start 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:30.122641813 +0000 UTC m=+0.187193519 container attach 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:20:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:30 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380003ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:30 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:30 np0005481680 friendly_bhabha[266803]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:20:30 np0005481680 friendly_bhabha[266803]: --> All data devices are unavailable
Oct 12 17:20:30 np0005481680 systemd[1]: libpod-7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf.scope: Deactivated successfully.
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:30.452474293 +0000 UTC m=+0.517026029 container died 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:20:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ac449c5310ac7530d5397a1ed51538a7d5dfcdb011e0547fba5f4ac1e353a9dc-merged.mount: Deactivated successfully.
Oct 12 17:20:30 np0005481680 podman[266786]: 2025-10-12 21:20:30.513346026 +0000 UTC m=+0.577897762 container remove 7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:20:30 np0005481680 systemd[1]: libpod-conmon-7fce819c7009027097a252344e24aefac2367bf2471405f3b05828269405a6bf.scope: Deactivated successfully.
Oct 12 17:20:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:31 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.222 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.256 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.257 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.257637236 +0000 UTC m=+0.069213358 container create 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.257 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.275 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.275 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.276 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.277 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.277 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.278 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.279 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.279 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.280 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.305 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.305 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.306 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.306 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:20:31 np0005481680 systemd[1]: Started libpod-conmon-74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52.scope.
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.307 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.227320582 +0000 UTC m=+0.038896754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.360108082 +0000 UTC m=+0.171684234 container init 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.371690467 +0000 UTC m=+0.183266579 container start 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.376613263 +0000 UTC m=+0.188189375 container attach 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:20:31 np0005481680 objective_nash[266941]: 167 167
Oct 12 17:20:31 np0005481680 systemd[1]: libpod-74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52.scope: Deactivated successfully.
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.379310122 +0000 UTC m=+0.190886244 container died 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:20:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-733d6687790e1a3072840278d4a26b91318c7b2c6a2e7129fb97944be73bf2e4-merged.mount: Deactivated successfully.
Oct 12 17:20:31 np0005481680 podman[266924]: 2025-10-12 21:20:31.435907396 +0000 UTC m=+0.247483508 container remove 74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nash, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:20:31 np0005481680 systemd[1]: libpod-conmon-74ab3b1ecbe063811bf4ee48ebb1033398e2929747bf9ebfa21874e90be9dc52.scope: Deactivated successfully.
Oct 12 17:20:31 np0005481680 podman[266985]: 2025-10-12 21:20:31.65973067 +0000 UTC m=+0.057143860 container create 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:20:31 np0005481680 systemd[1]: Started libpod-conmon-1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1.scope.
Oct 12 17:20:31 np0005481680 podman[266985]: 2025-10-12 21:20:31.630773581 +0000 UTC m=+0.028186761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90823ed41c9152835aeaaae5319c51ef140b2b1b4a1323100ade9908b8e37012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90823ed41c9152835aeaaae5319c51ef140b2b1b4a1323100ade9908b8e37012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90823ed41c9152835aeaaae5319c51ef140b2b1b4a1323100ade9908b8e37012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90823ed41c9152835aeaaae5319c51ef140b2b1b4a1323100ade9908b8e37012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:31 np0005481680 podman[266985]: 2025-10-12 21:20:31.757924077 +0000 UTC m=+0.155337287 container init 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:20:31 np0005481680 podman[266985]: 2025-10-12 21:20:31.769685037 +0000 UTC m=+0.167098227 container start 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:20:31 np0005481680 podman[266985]: 2025-10-12 21:20:31.773731631 +0000 UTC m=+0.171144821 container attach 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:20:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:20:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427165938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:20:31 np0005481680 nova_compute[264665]: 2025-10-12 21:20:31.819 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
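The pair of DEBUG lines above (processutils.py:384 and :422) bracket nova's resource tracker shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed storage. A minimal sketch of that round trip, assuming the ceph CLI and the client.openstack keyring are available on the host; the "stats" key names are assumed from the squid-era JSON report and are not shown in this log:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        # Same invocation as the oslo_concurrency.processutils line above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        return json.loads(out)

    report = ceph_df()
    stats = report.get("stats", {})  # assumed keys: total_bytes, total_avail_bytes
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))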
Oct 12 17:20:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:32] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:20:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:32] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.086 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]: {
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:    "0": [
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:        {
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "devices": [
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "/dev/loop3"
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            ],
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "lv_name": "ceph_lv0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "lv_size": "21470642176",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.088 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4879MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "name": "ceph_lv0",
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.089 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "tags": {
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.090 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.cluster_name": "ceph",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.crush_device_class": "",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.encrypted": "0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.osd_id": "0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.type": "block",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.vdo": "0",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:                "ceph.with_tpm": "0"
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            },
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "type": "block",
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:            "vg_name": "ceph_vg0"
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:        }
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]:    ]
Oct 12 17:20:32 np0005481680 priceless_elgamal[267002]: }
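The JSON document printed by priceless_elgamal above is a ceph-volume LVM listing: the top-level keys are OSD ids, each mapping to a list of logical-volume records (the container's stdout is interleaved with nova_compute lines that share the same second). Reassembled, it parses as ordinary JSON; a small sketch of extracting the OSD-to-device mapping, using a literal trimmed to fields actually shown in the log:

    import json

    # Trimmed from the container output above; values copied from the log.
    raw_json = """
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_size": "21470642176",
          "tags": {
            "ceph.osd_id": "0",
            "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
            "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5"
          }
        }
      ]
    }
    """

    for osd_id, lvs in json.loads(raw_json).items():
        for lv in lvs:
            print("osd.%s: %s on %s (fsid %s)" % (
                osd_id, lv["lv_path"], ",".join(lv["devices"]),
                lv["tags"]["ceph.osd_fsid"]))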
Oct 12 17:20:32 np0005481680 systemd[1]: libpod-1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1.scope: Deactivated successfully.
Oct 12 17:20:32 np0005481680 podman[266985]: 2025-10-12 21:20:32.131158284 +0000 UTC m=+0.528571464 container died 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:20:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-90823ed41c9152835aeaaae5319c51ef140b2b1b4a1323100ade9908b8e37012-merged.mount: Deactivated successfully.
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.165 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.167 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:20:32 np0005481680 podman[266985]: 2025-10-12 21:20:32.18543472 +0000 UTC m=+0.582847910 container remove 1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elgamal, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.188 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:20:32 np0005481680 systemd[1]: libpod-conmon-1a9585ad8c7ea9ec57142cd73bd6d7b1f6921257e7fabf73d51ee0dbbd0256e1.scope: Deactivated successfully.
Oct 12 17:20:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:32 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:32 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380003ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:20:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/795382785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.673 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.681 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.702 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.705 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:20:32 np0005481680 nova_compute[264665]: 2025-10-12 21:20:32.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
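The inventory dict logged at report.py:940 above is what the resource tracker pushes to Placement; effective schedulable capacity per resource class follows, as I understand Placement's model, (total - reserved) * allocation_ratio. Worked through with the logged values (a sketch, not nova code):

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1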
Oct 12 17:20:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:32 np0005481680 podman[267135]: 2025-10-12 21:20:32.97170084 +0000 UTC m=+0.072095770 container create 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:20:33 np0005481680 systemd[1]: Started libpod-conmon-09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3.scope.
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:32.941304314 +0000 UTC m=+0.041699294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:33 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:33 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:33.080837346 +0000 UTC m=+0.181232347 container init 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:33.092583766 +0000 UTC m=+0.192978666 container start 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:33.097448201 +0000 UTC m=+0.197843151 container attach 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:20:33 np0005481680 hungry_archimedes[267152]: 167 167
Oct 12 17:20:33 np0005481680 systemd[1]: libpod-09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3.scope: Deactivated successfully.
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:33.102232302 +0000 UTC m=+0.202627242 container died 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 17:20:33 np0005481680 systemd[1]: var-lib-containers-storage-overlay-dbdff92539ea4096925ff95d40b274c71eb18347e6e65f82c15b838a9156763d-merged.mount: Deactivated successfully.
Oct 12 17:20:33 np0005481680 nova_compute[264665]: 2025-10-12 21:20:33.142 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:20:33 np0005481680 podman[267135]: 2025-10-12 21:20:33.156891518 +0000 UTC m=+0.257286458 container remove 09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 12 17:20:33 np0005481680 systemd[1]: libpod-conmon-09bd65de26d2ff3c7b32e2c9e10f1538b38bd37772c9a0188bfb8ae1d42b37a3.scope: Deactivated successfully.
Oct 12 17:20:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:20:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:20:33 np0005481680 podman[267178]: 2025-10-12 21:20:33.377379206 +0000 UTC m=+0.067757811 container create d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:20:33 np0005481680 systemd[1]: Started libpod-conmon-d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428.scope.
Oct 12 17:20:33 np0005481680 podman[267178]: 2025-10-12 21:20:33.349773392 +0000 UTC m=+0.040151987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:20:33 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:20:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025e319b11c8b25174e75c1be0fab1cd15cd44e7acc7dca3cbb979f2f8435bca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025e319b11c8b25174e75c1be0fab1cd15cd44e7acc7dca3cbb979f2f8435bca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025e319b11c8b25174e75c1be0fab1cd15cd44e7acc7dca3cbb979f2f8435bca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025e319b11c8b25174e75c1be0fab1cd15cd44e7acc7dca3cbb979f2f8435bca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:20:33 np0005481680 podman[267178]: 2025-10-12 21:20:33.486461311 +0000 UTC m=+0.176839896 container init d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:20:33 np0005481680 podman[267178]: 2025-10-12 21:20:33.500873689 +0000 UTC m=+0.191252254 container start d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:20:33 np0005481680 podman[267178]: 2025-10-12 21:20:33.505044965 +0000 UTC m=+0.195423580 container attach d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:20:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:34 np0005481680 lvm[267270]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:20:34 np0005481680 lvm[267270]: VG ceph_vg0 finished
Oct 12 17:20:34 np0005481680 funny_snyder[267194]: {}
Oct 12 17:20:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:34 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:34 np0005481680 systemd[1]: libpod-d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428.scope: Deactivated successfully.
Oct 12 17:20:34 np0005481680 podman[267178]: 2025-10-12 21:20:34.429554735 +0000 UTC m=+1.119933340 container died d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:20:34 np0005481680 systemd[1]: libpod-d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428.scope: Consumed 1.593s CPU time.
Oct 12 17:20:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-025e319b11c8b25174e75c1be0fab1cd15cd44e7acc7dca3cbb979f2f8435bca-merged.mount: Deactivated successfully.
Oct 12 17:20:34 np0005481680 podman[267178]: 2025-10-12 21:20:34.478386201 +0000 UTC m=+1.168764766 container remove d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_snyder, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:20:34 np0005481680 systemd[1]: libpod-conmon-d4ff253a01c26a66ceb402a9168fc6b9848a78b05b59909bf8da6532612e2428.scope: Deactivated successfully.
Oct 12 17:20:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:20:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:20:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:34.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:20:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:35 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc380003ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:35 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:20:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:36 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:36 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:37 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:37.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:20:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:38 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:38 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:39 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:40 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:40 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:41 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:42] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:20:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:42] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:20:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:42.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:42 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc37c003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:43 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:43 np0005481680 podman[267346]: 2025-10-12 21:20:43.152649349 +0000 UTC m=+0.100779432 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:20:43 np0005481680 podman[267347]: 2025-10-12 21:20:43.198705555 +0000 UTC m=+0.146517960 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Oct 12 17:20:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:44 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:44 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:20:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:45 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc368002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:46 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:46 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:47 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:47.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:20:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:48 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:20:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:48 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:20:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:49 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:50 np0005481680 podman[267400]: 2025-10-12 21:20:50.131564581 +0000 UTC m=+0.090150252 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 12 17:20:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:50.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:50 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:50 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:51 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:20:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:20:52] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:20:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:52 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc364003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:52 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:20:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:20:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:20:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:53 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:54 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212054 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:20:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:54 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc358000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:20:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:20:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 12 17:20:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:55 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.114115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056114177, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1204, "num_deletes": 250, "total_data_size": 2061689, "memory_usage": 2086536, "flush_reason": "Manual Compaction"}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056146450, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1306350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22014, "largest_seqno": 23216, "table_properties": {"data_size": 1301790, "index_size": 2020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12069, "raw_average_key_size": 20, "raw_value_size": 1291824, "raw_average_value_size": 2204, "num_data_blocks": 87, "num_entries": 586, "num_filter_entries": 586, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760303956, "oldest_key_time": 1760303956, "file_creation_time": 1760304056, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 32412 microseconds, and 6825 cpu microseconds.
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.146527) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1306350 bytes OK
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.146561) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.191941) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.192009) EVENT_LOG_v1 {"time_micros": 1760304056191993, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.192042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2056299, prev total WAL file size 2056299, number of live WAL files 2.
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.192963) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1275KB)], [47(13MB)]
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056193002, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15986006, "oldest_snapshot_seqno": -1}
Oct 12 17:20:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:56 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5492 keys, 12591894 bytes, temperature: kUnknown
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056346296, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12591894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12556237, "index_size": 20827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138428, "raw_average_key_size": 25, "raw_value_size": 12457996, "raw_average_value_size": 2268, "num_data_blocks": 854, "num_entries": 5492, "num_filter_entries": 5492, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304056, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.346712) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12591894 bytes
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.376363) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 82.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(21.9) write-amplify(9.6) OK, records in: 5966, records dropped: 474 output_compression: NoCompression
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.376396) EVENT_LOG_v1 {"time_micros": 1760304056376381, "job": 24, "event": "compaction_finished", "compaction_time_micros": 153412, "compaction_time_cpu_micros": 30511, "output_level": 6, "num_output_files": 1, "total_output_size": 12591894, "num_input_records": 5966, "num_output_records": 5492, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056376960, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304056381855, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.192877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.381947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.381955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.381958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.381961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:20:56.381964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:20:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:56 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0092a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:20:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:57 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:20:57.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:20:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:20:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:58 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:58 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:20:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:20:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:20:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:20:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:20:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:20:58 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:58.916 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:20:58 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:58.917 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:20:58 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:20:58.918 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:20:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:20:59 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0092a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:00 np0005481680 podman[267431]: 2025-10-12 21:21:00.11675616 +0000 UTC m=+0.078282119 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:21:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:00.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:00 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:00 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:21:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:01 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:21:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:02] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:21:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:02 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0092a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:02 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:02.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:21:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:03 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:21:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:21:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:03 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:21:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:04 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:04 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:04.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:21:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:05 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc358002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc35c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:21:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:06 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:21:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:21:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:07 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc38c0092a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:07.140Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:21:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:07.140Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:21:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:07.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:21:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:08 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc358002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:08 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:21:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:09 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:09 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:21:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:10 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:10 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc358002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:10.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:21:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:11 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:12] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:21:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:12] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Oct 12 17:21:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:12.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:12 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:12 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:21:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:13 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:14 np0005481680 podman[267491]: 2025-10-12 21:21:14.104193512 +0000 UTC m=+0.070672315 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 12 17:21:14 np0005481680 podman[267492]: 2025-10-12 21:21:14.153534872 +0000 UTC m=+0.108827559 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 12 17:21:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:14 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:14 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:21:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:15 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc358003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:16.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:16 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212116 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:21:16 np0005481680 kernel: ganesha.nfsd[266488]: segfault at 50 ip 00007fc43915932e sp 00007fc3feffc210 error 4 in libntirpc.so.5.8[7fc43913e000+2c000] likely on CPU 4 (core 0, socket 4)
Oct 12 17:21:16 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:21:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[266157]: 12/10/2025 21:21:16 : epoch 68ec1b60 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3800047f0 fd 48 proxy ignored for local
Oct 12 17:21:16 np0005481680 systemd[1]: Started Process Core Dump (PID 267539/UID 0).
Oct 12 17:21:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:16.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:21:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:17.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:21:17 np0005481680 systemd-coredump[267540]: Process 266161 (ganesha.nfsd) of user 0 dumped core.
Oct 12 17:21:17 np0005481680 systemd-coredump[267540]: Stack trace of thread 56:
Oct 12 17:21:17 np0005481680 systemd-coredump[267540]: #0  0x00007fc43915932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Oct 12 17:21:17 np0005481680 systemd-coredump[267540]: ELF object binary architecture: AMD x86-64
Oct 12 17:21:17 np0005481680 systemd[1]: systemd-coredump@12-267539-0.service: Deactivated successfully.
Oct 12 17:21:17 np0005481680 systemd[1]: systemd-coredump@12-267539-0.service: Consumed 1.231s CPU time.
Oct 12 17:21:17 np0005481680 podman[267547]: 2025-10-12 21:21:17.914856887 +0000 UTC m=+0.034298516 container died acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:21:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d54261b7f8d9d6a78a7fcee7acb17f09729cd25c87b4a797e67bb97ba58e9edf-merged.mount: Deactivated successfully.
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:21:18
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.nfs']
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:21:18 np0005481680 podman[267547]: 2025-10-12 21:21:18.265829196 +0000 UTC m=+0.385270795 container remove acf6b2e5ed3b63ed68fca01545f0ee2976f148ad998b4296e8890c2a9357ac62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:21:18 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:21:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:18.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:21:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:21:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:21:18.356 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:21:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:21:18.357 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:21:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:21:18.357 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:18 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:21:18 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.943s CPU time.
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:21:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:18.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:21:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:20 np0005481680 podman[267616]: 2025-10-12 21:21:20.839620098 +0000 UTC m=+0.093792705 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:21:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:20.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:21:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:22] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Oct 12 17:21:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:22] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Oct 12 17:21:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:22.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212122 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:21:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:22.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:21:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:24.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:21:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:26.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:26.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:21:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:27.142Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:21:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:27.142Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:21:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:27.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:21:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:28.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 13.
Oct 12 17:21:28 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:21:28 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.943s CPU time.
Oct 12 17:21:28 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:21:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:28.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 12 17:21:29 np0005481680 podman[267695]: 2025-10-12 21:21:29.129857462 +0000 UTC m=+0.072019109 container create 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 17:21:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3fbc2af06840f5470a3dc9e793d13a9ffa7d148cc58e492aad471012057e51/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3fbc2af06840f5470a3dc9e793d13a9ffa7d148cc58e492aad471012057e51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3fbc2af06840f5470a3dc9e793d13a9ffa7d148cc58e492aad471012057e51/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed3fbc2af06840f5470a3dc9e793d13a9ffa7d148cc58e492aad471012057e51/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:29 np0005481680 podman[267695]: 2025-10-12 21:21:29.10353472 +0000 UTC m=+0.045696367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:29 np0005481680 podman[267695]: 2025-10-12 21:21:29.209240918 +0000 UTC m=+0.151402655 container init 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:21:29 np0005481680 podman[267695]: 2025-10-12 21:21:29.214179084 +0000 UTC m=+0.156340721 container start 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:21:29 np0005481680 bash[267695]: 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20
Oct 12 17:21:29 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:21:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:21:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.678 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.678 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.679 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.679 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.680 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:30 np0005481680 nova_compute[264665]: 2025-10-12 21:21:30.680 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:21:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:30.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:21:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:31 np0005481680 podman[267754]: 2025-10-12 21:21:31.132294997 +0000 UTC m=+0.089488505 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.665 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.689 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.689 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
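[Editor's note] The three lockutils lines above trace oslo.concurrency's named-lock protocol around the resource tracker: acquire "compute_resources", run clean_compute_node_cache, release, with waited/held durations logged at each step. The primitive itself, sketched (the lock name comes from the log; the body is illustrative):

    # The named-lock pattern behind the Acquiring/acquired/released lines
    # above. oslo.concurrency emits those DEBUG messages automatically.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # critical section, e.g. pruning stale compute-node cache entries
        pass

    # Equivalent decorator form, widely used in nova:
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass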
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.690 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:21:31 np0005481680 nova_compute[264665]: 2025-10-12 21:21:31.691 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:21:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:32] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Oct 12 17:21:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:32] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Oct 12 17:21:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:21:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332954766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.230 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
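[Editor's note] The Running cmd / CMD returned pair above shows the resource tracker shelling out to ceph df --format=json (about half a second here) to size its RBD-backed disk inventory; the ceph-mon handle_command/audit lines in between are the server side of the same call. A sketch of the call and the cluster-level fields a consumer would typically read (the stats key names are standard "ceph df" JSON output, assumed here rather than shown in the log):

    # Re-run the "ceph df" call from the log and read the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 1024**3:.1f} GiB free '
          f'of {stats["total_bytes"] / 1024**3:.1f} GiB')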
Oct 12 17:21:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:32.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.501 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
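[Editor's note] The WARNING above fires when the (virtual) topology exposes more than one CPU socket inside a single NUMA node, in which case nova's libvirt driver declines to offer `socket` PCI NUMA affinity. One way to inspect that condition from standard Linux sysfs, sketched below (a diagnostic aid under those path assumptions, not nova's own detection code):

    # Count distinct physical sockets per NUMA node from sysfs; more than
    # one socket in a node is the condition behind the warning above.
    import glob
    import os

    sockets_per_node = {}
    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        with open(os.path.join(cpu, "topology/physical_package_id")) as fh:
            pkg = fh.read().strip()
        for node in glob.glob(os.path.join(cpu, "node[0-9]*")):
            sockets_per_node.setdefault(os.path.basename(node), set()).add(pkg)

    for node, pkgs in sorted(sockets_per_node.items()):
        print(node, "->", len(pkgs), "socket(s)")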
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.503 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4940MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.504 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.505 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.613 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.613 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:21:32 np0005481680 nova_compute[264665]: 2025-10-12 21:21:32.712 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:21:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:21:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:21:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372726151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:21:33 np0005481680 nova_compute[264665]: 2025-10-12 21:21:33.237 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:21:33 np0005481680 nova_compute[264665]: 2025-10-12 21:21:33.250 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:21:33 np0005481680 nova_compute[264665]: 2025-10-12 21:21:33.288 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
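[Editor's note] The inventory dict above fixes the host's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. Worked out for this host:

    # Schedulable capacity implied by the inventory logged above, using
    # placement's capacity rule (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 53.1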
Oct 12 17:21:33 np0005481680 nova_compute[264665]: 2025-10-12 21:21:33.290 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:21:33 np0005481680 nova_compute[264665]: 2025-10-12 21:21:33.290 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:21:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:21:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:21:34 np0005481680 nova_compute[264665]: 2025-10-12 21:21:34.285 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:21:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:34.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:21:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/621363458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:21:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:21:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:21:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:21:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:21:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.090718) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096090763, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 595, "num_deletes": 256, "total_data_size": 707974, "memory_usage": 720232, "flush_reason": "Manual Compaction"}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096098730, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 700503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23218, "largest_seqno": 23811, "table_properties": {"data_size": 697392, "index_size": 1019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6899, "raw_average_key_size": 17, "raw_value_size": 691123, "raw_average_value_size": 1781, "num_data_blocks": 46, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304056, "oldest_key_time": 1760304056, "file_creation_time": 1760304096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 8068 microseconds, and 4756 cpu microseconds.
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.098786) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 700503 bytes OK
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.098810) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.100705) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.100726) EVENT_LOG_v1 {"time_micros": 1760304096100719, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.100749) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 704777, prev total WAL file size 704777, number of live WAL files 2.
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.101517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(684KB)], [50(12MB)]
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096101565, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13292397, "oldest_snapshot_seqno": -1}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5357 keys, 13148723 bytes, temperature: kUnknown
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096198002, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13148723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13113029, "index_size": 21184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 136742, "raw_average_key_size": 25, "raw_value_size": 13016304, "raw_average_value_size": 2429, "num_data_blocks": 866, "num_entries": 5357, "num_filter_entries": 5357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.198406) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13148723 bytes
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.200240) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.6 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.0 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(37.7) write-amplify(18.8) OK, records in: 5880, records dropped: 523 output_compression: NoCompression
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.200275) EVENT_LOG_v1 {"time_micros": 1760304096200259, "job": 26, "event": "compaction_finished", "compaction_time_micros": 96579, "compaction_time_cpu_micros": 46453, "output_level": 6, "num_output_files": 1, "total_output_size": 13148723, "num_input_records": 5880, "num_output_records": 5357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
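[Editor's note] Jobs 25 and 26 above form one manual flush-plus-compaction cycle on the mon's RocksDB store, and the summary's amplification figures follow directly from the byte counts: 13148723 bytes rewritten at L6 for a 700503-byte L0 flush gives the write-amplify of ~18.8. The EVENT_LOG_v1 records are JSON after a fixed prefix, so they can be mined straight out of a journal dump like this one; a sketch (the input file name is an assumption):

    # Extract RocksDB EVENT_LOG_v1 records (flush/compaction stats) from a
    # journal dump such as this log. "mon.log" is an assumed file name.
    import json
    import re

    event = re.compile(r"EVENT_LOG_v1 (\{.*\})")
    with open("mon.log") as fh:
        for line in fh:
            m = event.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["total_output_size"],
                      ev["compaction_time_micros"])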
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096200646, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304096205547, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.101378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.205647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.205657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.205661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.205665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:21:36.205669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:21:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.750175885 +0000 UTC m=+0.056201867 container create b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:21:36 np0005481680 systemd[1]: Started libpod-conmon-b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0.scope.
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.724092139 +0000 UTC m=+0.030118201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.852027294 +0000 UTC m=+0.158053306 container init b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.864712178 +0000 UTC m=+0.170738180 container start b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.868876714 +0000 UTC m=+0.174902746 container attach b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:36 np0005481680 strange_swartz[268012]: 167 167
Oct 12 17:21:36 np0005481680 systemd[1]: libpod-b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0.scope: Deactivated successfully.
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.875499083 +0000 UTC m=+0.181525095 container died b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 17:21:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:21:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-08ece42351fa0d6c00d4a1fc8257a6fec90b99287a952ca8618046aa0327f2ca-merged.mount: Deactivated successfully.
Oct 12 17:21:36 np0005481680 podman[267996]: 2025-10-12 21:21:36.933780522 +0000 UTC m=+0.239806534 container remove b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:36 np0005481680 systemd[1]: libpod-conmon-b852487e397e5ddbcf98303a14df207503a1b260403d97286e5d08e01b510fe0.scope: Deactivated successfully.
Oct 12 17:21:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:37.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
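[Editor's note] The alertmanager error above means both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) failed to answer POSTs to /api/prometheus_receiver before the notification deadline. A throwaway listener for exercising that path, sketched below — a stand-in for testing only, not the Ceph dashboard's implementation (plain HTTP, matching the receiver URLs in the log):

    # Stand-in Alertmanager webhook receiver for testing the POST path
    # that is timing out above.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print("alerts:", len(json.loads(body).get("alerts", [])))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()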
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.171473908 +0000 UTC m=+0.068734285 container create 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.138921268 +0000 UTC m=+0.036181695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:37 np0005481680 systemd[1]: Started libpod-conmon-6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e.scope.
Oct 12 17:21:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.297751694 +0000 UTC m=+0.195012081 container init 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.311531726 +0000 UTC m=+0.208792103 container start 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.315823227 +0000 UTC m=+0.213083604 container attach 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 17:21:37 np0005481680 flamboyant_blackwell[268057]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:21:37 np0005481680 flamboyant_blackwell[268057]: --> All data devices are unavailable
Oct 12 17:21:37 np0005481680 systemd[1]: libpod-6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e.scope: Deactivated successfully.
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.693163809 +0000 UTC m=+0.590424176 container died 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:21:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e3c80ad9d8197528d4bae2d0bd419ae03a5e72249b2ec0477bcad14b5aad57a9-merged.mount: Deactivated successfully.
Oct 12 17:21:37 np0005481680 podman[268039]: 2025-10-12 21:21:37.754127501 +0000 UTC m=+0.651387868 container remove 6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct 12 17:21:37 np0005481680 systemd[1]: libpod-conmon-6c819e0e429c6c3fd69577b4dd2dc1332e4b08cfb0ea73b4b0a32c2a139c346e.scope: Deactivated successfully.
Oct 12 17:21:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:38.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:38 np0005481680 systemd[1]: packagekit.service: Deactivated successfully.
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.538981958 +0000 UTC m=+0.067047018 container create 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 17:21:38 np0005481680 systemd[1]: Started libpod-conmon-8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249.scope.
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.509449022 +0000 UTC m=+0.037514152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.636013742 +0000 UTC m=+0.164078832 container init 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.646103431 +0000 UTC m=+0.174168501 container start 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.65036249 +0000 UTC m=+0.178427610 container attach 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:21:38 np0005481680 fervent_tu[268194]: 167 167
Oct 12 17:21:38 np0005481680 systemd[1]: libpod-8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249.scope: Deactivated successfully.
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.653426509 +0000 UTC m=+0.181491569 container died 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:21:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-78d80839664904178ddd50ca0d26bafaae315f9f581ce63c548eb2a678256303-merged.mount: Deactivated successfully.
Oct 12 17:21:38 np0005481680 podman[268177]: 2025-10-12 21:21:38.706839606 +0000 UTC m=+0.234904676 container remove 8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:21:38 np0005481680 systemd[1]: libpod-conmon-8a08706fa90427e2fb60ca19e5a206d155c65b6ece6a8e9e8f6ba44f3629e249.scope: Deactivated successfully.
Oct 12 17:21:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:21:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:38.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:38 np0005481680 podman[268218]: 2025-10-12 21:21:38.961937858 +0000 UTC m=+0.065952439 container create 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:38.930040131 +0000 UTC m=+0.034054742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:39 np0005481680 systemd[1]: Started libpod-conmon-1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81.scope.
Oct 12 17:21:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53083e2f8a539afd101175821ef603c120021151f96a0b093f2041a52aa2b31a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53083e2f8a539afd101175821ef603c120021151f96a0b093f2041a52aa2b31a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53083e2f8a539afd101175821ef603c120021151f96a0b093f2041a52aa2b31a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53083e2f8a539afd101175821ef603c120021151f96a0b093f2041a52aa2b31a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:39.118011895 +0000 UTC m=+0.222026476 container init 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:39.129833928 +0000 UTC m=+0.233848499 container start 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:39.186242552 +0000 UTC m=+0.290257113 container attach 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]: {
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:    "0": [
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:        {
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "devices": [
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "/dev/loop3"
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            ],
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "lv_name": "ceph_lv0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "lv_size": "21470642176",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "name": "ceph_lv0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "tags": {
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.cluster_name": "ceph",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.crush_device_class": "",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.encrypted": "0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.osd_id": "0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.type": "block",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.vdo": "0",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:                "ceph.with_tpm": "0"
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            },
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "type": "block",
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:            "vg_name": "ceph_vg0"
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:        }
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]:    ]
Oct 12 17:21:39 np0005481680 peaceful_hamilton[268235]: }
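[editor's note] The JSON block above is the inventory the short-lived ceph-volume container prints before exiting: a map of OSD id to the logical volumes backing it. A minimal parsing sketch, assuming the output has been captured as shown (the `raw` string below is abridged to the fields actually used):

```python
import json

# Output captured from the ceph-volume container above
# (keys are OSD ids, values are lists of LV records); abridged.
raw = """
{
   "0": [
       {
           "devices": ["/dev/loop3"],
           "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "lv_size": "21470642176",
           "type": "block",
           "vg_name": "ceph_vg0"
       }
   ]
}
"""

inventory = json.loads(raw)
for osd_id, lvs in inventory.items():
    for lv in lvs:
        print(f"osd.{osd_id}: type={lv['type']} "
              f"devices={','.join(lv['devices'])} "
              f"lv={lv['lv_path']} size={int(lv['lv_size'])} bytes")
```

On this host it reports a single OSD, osd.0, backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (~20 GiB).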
Oct 12 17:21:39 np0005481680 systemd[1]: libpod-1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81.scope: Deactivated successfully.
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:39.482608661 +0000 UTC m=+0.586623232 container died 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:21:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-53083e2f8a539afd101175821ef603c120021151f96a0b093f2041a52aa2b31a-merged.mount: Deactivated successfully.
Oct 12 17:21:39 np0005481680 podman[268218]: 2025-10-12 21:21:39.75394917 +0000 UTC m=+0.857963731 container remove 1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_hamilton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:21:39 np0005481680 systemd[1]: libpod-conmon-1cabc7a90330d437cb789f443f77dee24b31cf2270b70155800bc110cb0bac81.scope: Deactivated successfully.
Oct 12 17:21:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:40.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212140 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
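[editor's note] The haproxy WARNING above marks backend/nfs.cephfs.1 DOWN after a Layer4 check, which is nothing more than a bare TCP connect that got "Connection refused". A sketch reproducing such a check by hand; the host and port below are hypothetical, since the log line does not show the backend's address (2049 assumed as the usual NFS port):

```python
import socket

def layer4_check(host: str, port: int, timeout: float = 2.0) -> str:
    """Replicate a haproxy-style Layer4 health check: a bare TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "UP (Layer4 check passed)"
    except ConnectionRefusedError:
        return "DOWN (Connection refused)"
    except OSError as exc:  # timeouts, unreachable hosts, etc.
        return f"DOWN ({exc})"

# Host and port are hypothetical; the log does not show the backend address.
print(layer4_check("compute-0.ctlplane.example.com", 2049))
```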
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.5642674 +0000 UTC m=+0.070300302 container create 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:21:40 np0005481680 systemd[1]: Started libpod-conmon-17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496.scope.
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.53811587 +0000 UTC m=+0.044148842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.706623244 +0000 UTC m=+0.212656206 container init 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.71623258 +0000 UTC m=+0.222265492 container start 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.721062875 +0000 UTC m=+0.227095787 container attach 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:21:40 np0005481680 amazing_rosalind[268370]: 167 167
Oct 12 17:21:40 np0005481680 systemd[1]: libpod-17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496.scope: Deactivated successfully.
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.7251706 +0000 UTC m=+0.231203502 container died 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 12 17:21:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-269f2c8f84259393c68e87e7fe85b37ed37bface57334120a909f29d6d5ad3cf-merged.mount: Deactivated successfully.
Oct 12 17:21:40 np0005481680 podman[268353]: 2025-10-12 21:21:40.782681333 +0000 UTC m=+0.288714265 container remove 17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_rosalind, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:40 np0005481680 systemd[1]: libpod-conmon-17d9e32b4edf9550ac96dabb1c26f7d2b32dcbc8332880843f6beb5598030496.scope: Deactivated successfully.
Oct 12 17:21:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:21:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:40.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
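[editor's note] The radosgw beast lines recur about every two seconds: anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102, consistent with an external load-balancer health check. A sketch of splitting the beast access format into fields, using the line above as the test string:

```python
import re

# Beast access-log layout, as seen above:
#   beast: <req ptr>: <client> - <user> [<when>] "<request>" <status> <bytes> - - - latency=<sec>s
BEAST_RE = re.compile(
    r"beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) "
    r"\[(?P<when>[^\]]+)\] \"(?P<request>[^\"]+)\" (?P<status>\d+) "
    r"(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s"
)

line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
        '[12/Oct/2025:21:21:40.886 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000025s')

m = BEAST_RE.search(line)
assert m is not None
print(m.group("client"), m.group("request"), m.group("status"),
      f'{float(m.group("latency")) * 1000:.3f} ms')
```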
Oct 12 17:21:41 np0005481680 podman[268419]: 2025-10-12 21:21:41.034226163 +0000 UTC m=+0.071674106 container create bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:21:41 np0005481680 systemd[1]: Started libpod-conmon-bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241.scope.
Oct 12 17:21:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:41 np0005481680 podman[268419]: 2025-10-12 21:21:41.004496222 +0000 UTC m=+0.041944215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:21:41 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:21:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321c59b50da403f95b5e350272fcf1f1f8bf9ac3137400e7da57f213015b2dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321c59b50da403f95b5e350272fcf1f1f8bf9ac3137400e7da57f213015b2dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321c59b50da403f95b5e350272fcf1f1f8bf9ac3137400e7da57f213015b2dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321c59b50da403f95b5e350272fcf1f1f8bf9ac3137400e7da57f213015b2dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:21:41 np0005481680 podman[268419]: 2025-10-12 21:21:41.136043581 +0000 UTC m=+0.173491564 container init bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:21:41 np0005481680 podman[268419]: 2025-10-12 21:21:41.157441348 +0000 UTC m=+0.194889281 container start bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:21:41 np0005481680 podman[268419]: 2025-10-12 21:21:41.161786289 +0000 UTC m=+0.199234222 container attach bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:21:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
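[editor's note] The ganesha.nfsd startup block above follows a fixed line layout (timestamp, epoch, host, program[thread], function, :COMPONENT :LEVEL :message). The DBUS CRITs are all explained by the first of them: /run/dbus/system_bus_socket does not exist inside the container, so every later DBus registration fails and the dbus service thread exits, while the NFS server itself still initializes. A sketch of splitting that layout into fields, using one of the CRIT lines as the test string:

```python
import re

# Ganesha log line layout, as seen above:
#   <date> <time> : epoch <id> : <host> : <prog>[<thread>] <func> :<COMPONENT> :<LEVEL> :<message>
GANESHA_RE = re.compile(
    r"^(?P<ts>\S+ \S+) : epoch (?P<epoch>\S+) : (?P<host>\S+) : "
    r"(?P<prog>\S+)\[(?P<thread>[^\]]+)\] (?P<func>\S+) "
    r":(?P<component>[^:]+) :(?P<level>[^:]+) :(?P<message>.*)$"
)

sample = ("12/10/2025 21:21:41 : epoch 68ec1bd9 : compute-0 : "
          "ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT "
          ":dbus_bus_get failed (Failed to connect to socket "
          "/run/dbus/system_bus_socket: No such file or directory)")

m = GANESHA_RE.match(sample)
assert m is not None
print(m.group("level"), m.group("component"), "-", m.group("message"))
```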
Oct 12 17:21:41 np0005481680 lvm[268524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:21:41 np0005481680 lvm[268524]: VG ceph_vg0 finished
Oct 12 17:21:41 np0005481680 sharp_fermat[268435]: {}
Oct 12 17:21:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:42] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:21:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:42] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Oct 12 17:21:42 np0005481680 systemd[1]: libpod-bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241.scope: Deactivated successfully.
Oct 12 17:21:42 np0005481680 systemd[1]: libpod-bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241.scope: Consumed 1.475s CPU time.
Oct 12 17:21:42 np0005481680 podman[268419]: 2025-10-12 21:21:42.036264762 +0000 UTC m=+1.073712695 container died bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:21:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-321c59b50da403f95b5e350272fcf1f1f8bf9ac3137400e7da57f213015b2dda-merged.mount: Deactivated successfully.
Oct 12 17:21:42 np0005481680 podman[268419]: 2025-10-12 21:21:42.107705382 +0000 UTC m=+1.145153325 container remove bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:21:42 np0005481680 systemd[1]: libpod-conmon-bfc908ba67420534126f7aa354fa6c0a9db1d84ebc14f5b61b3ddf36e83fc241.scope: Deactivated successfully.
Oct 12 17:21:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:21:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:21:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:42.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:21:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:42.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:43 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:21:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:21:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:21:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212144 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:21:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4000f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Oct 12 17:21:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:45 np0005481680 podman[268569]: 2025-10-12 21:21:45.152328645 +0000 UTC m=+0.104178928 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 12 17:21:45 np0005481680 podman[268570]: 2025-10-12 21:21:45.174830462 +0000 UTC m=+0.128201554 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
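[editor's note] The two health_status=healthy events come from podman's periodic healthcheck timer, which runs each container's configured 'test' command (here /openstack/healthcheck, mounted from /var/lib/openstack/healthchecks/<name>) and records the result. The same check can be run on demand; a sketch using `podman healthcheck run`, whose exit status is 0 when the container is healthy:

```python
import subprocess

def healthcheck(container: str) -> bool:
    """Run the container's configured healthcheck once, on demand.

    `podman healthcheck run` executes the configured 'test' command
    inside the container and exits 0 when healthy.
    """
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Container names taken from the health_status events above.
for name in ("iscsid", "ovn_controller"):
    print(name, "healthy" if healthcheck(name) else "unhealthy")
```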
Oct 12 17:21:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Oct 12 17:21:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:47 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:47.144Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:21:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:47.144Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:21:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:47.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
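[editor's note] The alertmanager dispatcher above gives up notifying the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) with i/o timeouts. A debugging stand-in, not the dashboard's implementation: a minimal listener on the same port and path, useful for telling a network-level timeout apart from a misbehaving receiver service:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the ceph-dashboard webhook receiver alertmanager is
# failing to reach; run it on the target host to confirm whether the
# timeouts are network-level or service-level.
class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"{self.path}: {len(body)} bytes from {self.client_address[0]}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```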
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:21:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2425805547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:21:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2425805547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:21:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Oct 12 17:21:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:21:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:21:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:51 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:51 np0005481680 podman[268620]: 2025-10-12 21:21:51.124854001 +0000 UTC m=+0.087655505 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 12 17:21:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:52] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:21:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:21:52] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:21:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:21:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:21:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 12 17:21:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:53 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:21:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:54.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:21:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:21:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:21:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:21:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:21:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:57 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:21:57.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:21:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:21:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:21:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:21:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:21:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:21:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:21:58.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:21:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:21:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:22:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:01 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:02] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:22:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:02] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:22:02 np0005481680 podman[268678]: 2025-10-12 21:22:02.131101956 +0000 UTC m=+0.087512563 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 12 17:22:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:02.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:22:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:22:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:02.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:03 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:22:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:22:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:04.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:22:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:06.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:22:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:07 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:07.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:22:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:08.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:22:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:08.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:09 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:10.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:22:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:11 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:12] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:22:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:12] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Oct 12 17:22:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:12.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:15 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:16 np0005481680 podman[268714]: 2025-10-12 21:22:16.141494463 +0000 UTC m=+0.090686823 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:22:16 np0005481680 podman[268715]: 2025-10-12 21:22:16.189618146 +0000 UTC m=+0.137865132 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:22:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:16.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:17 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:17.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:22:18
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'vms', 'volumes', '.nfs', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', '.mgr']
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:22:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:22:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:22:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:22:18.357 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:22:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:22:18.359 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:22:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:22:18.359 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:22:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa8001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:22:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:20.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:22:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:21 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:22] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:22 np0005481680 podman[268789]: 2025-10-12 21:22:22.141522176 +0000 UTC m=+0.095530318 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:22:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:22.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:22.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:23 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:24.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:25 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:26.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:27.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:22:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:27 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:28.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:30 np0005481680 nova_compute[264665]: 2025-10-12 21:22:30.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:22:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:31 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.681 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.681 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.681 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.682 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.704 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.705 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:22:31 np0005481680 nova_compute[264665]: 2025-10-12 21:22:31.706 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:22:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:32] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:22:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129825223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.222 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:22:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.466 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:22:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.468 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.468 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.469 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.533 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.534 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:22:32 np0005481680 nova_compute[264665]: 2025-10-12 21:22:32.555 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:22:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:22:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949846542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:22:33 np0005481680 nova_compute[264665]: 2025-10-12 21:22:33.030 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:22:33 np0005481680 nova_compute[264665]: 2025-10-12 21:22:33.039 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:22:33 np0005481680 nova_compute[264665]: 2025-10-12 21:22:33.055 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:22:33 np0005481680 nova_compute[264665]: 2025-10-12 21:22:33.058 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:22:33 np0005481680 nova_compute[264665]: 2025-10-12 21:22:33.058 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:22:33 np0005481680 podman[268862]: 2025-10-12 21:22:33.12321772 +0000 UTC m=+0.068431573 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 12 17:22:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:33 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:22:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.040 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.057 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.058 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.058 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.058 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:22:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:34 np0005481680 nova_compute[264665]: 2025-10-12 21:22:34.676 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:22:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:36.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:37.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:22:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:37 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:38.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:39 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:22:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:42] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Oct 12 17:22:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:43 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.474391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163474476, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 251, "total_data_size": 1277547, "memory_usage": 1298368, "flush_reason": "Manual Compaction"}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163485641, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1264083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23812, "largest_seqno": 24630, "table_properties": {"data_size": 1259948, "index_size": 1851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9208, "raw_average_key_size": 19, "raw_value_size": 1251692, "raw_average_value_size": 2651, "num_data_blocks": 82, "num_entries": 472, "num_filter_entries": 472, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304096, "oldest_key_time": 1760304096, "file_creation_time": 1760304163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11295 microseconds, and 6200 cpu microseconds.
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.485699) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1264083 bytes OK
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.485722) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.487916) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.487940) EVENT_LOG_v1 {"time_micros": 1760304163487933, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.487962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1273579, prev total WAL file size 1273579, number of live WAL files 2.
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.489026) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1234KB)], [53(12MB)]
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163489109, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14412806, "oldest_snapshot_seqno": -1}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5315 keys, 12256910 bytes, temperature: kUnknown
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163557252, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12256910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12222280, "index_size": 20248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 136539, "raw_average_key_size": 25, "raw_value_size": 12126890, "raw_average_value_size": 2281, "num_data_blocks": 823, "num_entries": 5315, "num_filter_entries": 5315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.557743) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12256910 bytes
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.559207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.0 rd, 179.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 12.5 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(21.1) write-amplify(9.7) OK, records in: 5829, records dropped: 514 output_compression: NoCompression
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.559228) EVENT_LOG_v1 {"time_micros": 1760304163559219, "job": 28, "event": "compaction_finished", "compaction_time_micros": 68305, "compaction_time_cpu_micros": 29529, "output_level": 6, "num_output_files": 1, "total_output_size": 12256910, "num_input_records": 5829, "num_output_records": 5315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163560014, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304163562672, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.488937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.562772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.562780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.562783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.562786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:22:43.562789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:22:43 np0005481680 podman[269041]: 2025-10-12 21:22:43.611573512 +0000 UTC m=+0.098816472 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:22:43 np0005481680 podman[269041]: 2025-10-12 21:22:43.725629452 +0000 UTC m=+0.212872402 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:22:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:44 np0005481680 podman[269165]: 2025-10-12 21:22:44.42062766 +0000 UTC m=+0.096792820 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:44 np0005481680 podman[269165]: 2025-10-12 21:22:44.436489015 +0000 UTC m=+0.112654115 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:44 np0005481680 podman[269259]: 2025-10-12 21:22:44.95489312 +0000 UTC m=+0.088336873 container exec 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:22:44 np0005481680 podman[269259]: 2025-10-12 21:22:44.974510112 +0000 UTC m=+0.107953815 container exec_died 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:22:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab0001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:45 np0005481680 podman[269326]: 2025-10-12 21:22:45.299524145 +0000 UTC m=+0.085103180 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:22:45 np0005481680 podman[269326]: 2025-10-12 21:22:45.313450481 +0000 UTC m=+0.099029506 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:22:45 np0005481680 podman[269393]: 2025-10-12 21:22:45.644404616 +0000 UTC m=+0.080748659 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, version=2.2.4)
Oct 12 17:22:45 np0005481680 podman[269393]: 2025-10-12 21:22:45.66447585 +0000 UTC m=+0.100819893 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Oct 12 17:22:45 np0005481680 podman[269460]: 2025-10-12 21:22:45.994645185 +0000 UTC m=+0.084518936 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:46 np0005481680 podman[269460]: 2025-10-12 21:22:46.072013846 +0000 UTC m=+0.161887547 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:46.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:46 np0005481680 podman[269535]: 2025-10-12 21:22:46.40450704 +0000 UTC m=+0.087151413 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:22:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:46 np0005481680 podman[269535]: 2025-10-12 21:22:46.607671433 +0000 UTC m=+0.290315816 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:22:46 np0005481680 podman[269580]: 2025-10-12 21:22:46.805301853 +0000 UTC m=+0.119525772 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 12 17:22:46 np0005481680 podman[269581]: 2025-10-12 21:22:46.836309977 +0000 UTC m=+0.144321867 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:22:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:47.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:22:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:47.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:22:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:47.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:22:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:47 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:47 np0005481680 podman[269685]: 2025-10-12 21:22:47.222831305 +0000 UTC m=+0.092632253 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:47 np0005481680 podman[269685]: 2025-10-12 21:22:47.281923938 +0000 UTC m=+0.151724846 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:22:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:22:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:22:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c000d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:22:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4171289107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:22:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4171289107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:22:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=404 latency=0.002000050s ======
Oct 12 17:22:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:48.950 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000050s
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - - [12/Oct/2025:21:22:48.970 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct 12 17:22:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.113194751 +0000 UTC m=+0.068643289 container create d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:22:49 np0005481680 systemd[1]: Started libpod-conmon-d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b.scope.
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.083799788 +0000 UTC m=+0.039248386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.219149624 +0000 UTC m=+0.174598172 container init d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.229607431 +0000 UTC m=+0.185055969 container start d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.233684456 +0000 UTC m=+0.189132994 container attach d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:22:49 np0005481680 mystifying_turing[269922]: 167 167
Oct 12 17:22:49 np0005481680 systemd[1]: libpod-d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b.scope: Deactivated successfully.
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.238033807 +0000 UTC m=+0.193482355 container died d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:22:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-152959a61d257c070c8c6134533151abe82428aac3a199faf7dea892d221f056-merged.mount: Deactivated successfully.
Oct 12 17:22:49 np0005481680 podman[269905]: 2025-10-12 21:22:49.295977712 +0000 UTC m=+0.251426250 container remove d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_turing, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:22:49 np0005481680 systemd[1]: libpod-conmon-d995d0d17f3abe4d007a2451176b19abbc96098233c5c1edb25b226ad8fa943b.scope: Deactivated successfully.
Oct 12 17:22:49 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:22:49 np0005481680 podman[269947]: 2025-10-12 21:22:49.549281008 +0000 UTC m=+0.066859933 container create ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:22:49 np0005481680 systemd[1]: Started libpod-conmon-ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5.scope.
Oct 12 17:22:49 np0005481680 podman[269947]: 2025-10-12 21:22:49.522869631 +0000 UTC m=+0.040448606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:49 np0005481680 podman[269947]: 2025-10-12 21:22:49.657783706 +0000 UTC m=+0.175362691 container init ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 12 17:22:49 np0005481680 podman[269947]: 2025-10-12 21:22:49.668318956 +0000 UTC m=+0.185897891 container start ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:22:49 np0005481680 podman[269947]: 2025-10-12 21:22:49.672715718 +0000 UTC m=+0.190294653 container attach ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 12 17:22:50 np0005481680 busy_mirzakhani[269964]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:22:50 np0005481680 busy_mirzakhani[269964]: --> All data devices are unavailable
Oct 12 17:22:50 np0005481680 systemd[1]: libpod-ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5.scope: Deactivated successfully.
Oct 12 17:22:50 np0005481680 podman[269947]: 2025-10-12 21:22:50.132731738 +0000 UTC m=+0.650310663 container died ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:22:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-043291495aa06a99de14609fe2bc1b6a7ae1a65a64adbd55660d1e3244629aad-merged.mount: Deactivated successfully.
Oct 12 17:22:50 np0005481680 podman[269947]: 2025-10-12 21:22:50.189577524 +0000 UTC m=+0.707156459 container remove ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:22:50 np0005481680 systemd[1]: libpod-conmon-ef0da4669f793e4703f1e240f05a77aefb7d97865ef9886326bf80fbc8a4c5c5.scope: Deactivated successfully.
Oct 12 17:22:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c000d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 12 17:22:50 np0005481680 podman[270084]: 2025-10-12 21:22:50.978620478 +0000 UTC m=+0.066946455 container create 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 17:22:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:51 np0005481680 systemd[1]: Started libpod-conmon-5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706.scope.
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:50.951893654 +0000 UTC m=+0.040219671 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:51.110330831 +0000 UTC m=+0.198656958 container init 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:51.118800118 +0000 UTC m=+0.207126085 container start 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:51.122889792 +0000 UTC m=+0.211215819 container attach 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:22:51 np0005481680 relaxed_noether[270101]: 167 167
Oct 12 17:22:51 np0005481680 systemd[1]: libpod-5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706.scope: Deactivated successfully.
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:51.127511211 +0000 UTC m=+0.215837188 container died 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:22:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b8a4102e3a1d5c9601340082611ac136feb7a98d15bc1315550b4c03d0605c5d-merged.mount: Deactivated successfully.
Oct 12 17:22:51 np0005481680 podman[270084]: 2025-10-12 21:22:51.175056049 +0000 UTC m=+0.263382016 container remove 5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:22:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:51 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:51 np0005481680 systemd[1]: libpod-conmon-5bebf599aee846ba637474188fff3edde89ec8a2fe449cc07d0e2875552e4706.scope: Deactivated successfully.
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.428222442 +0000 UTC m=+0.066705760 container create f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:22:51 np0005481680 systemd[1]: Started libpod-conmon-f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d.scope.
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.401724943 +0000 UTC m=+0.040208321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9232771e2da1e91502949a74c389db8fbba54d7576cfcd634880dffe9eab45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9232771e2da1e91502949a74c389db8fbba54d7576cfcd634880dffe9eab45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9232771e2da1e91502949a74c389db8fbba54d7576cfcd634880dffe9eab45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:51 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9232771e2da1e91502949a74c389db8fbba54d7576cfcd634880dffe9eab45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.532947413 +0000 UTC m=+0.171430771 container init f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.547840964 +0000 UTC m=+0.186324292 container start f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.553084779 +0000 UTC m=+0.191568107 container attach f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]: {
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:    "0": [
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:        {
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "devices": [
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "/dev/loop3"
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            ],
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "lv_name": "ceph_lv0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "lv_size": "21470642176",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "name": "ceph_lv0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "tags": {
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.cluster_name": "ceph",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.crush_device_class": "",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.encrypted": "0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.osd_id": "0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.type": "block",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.vdo": "0",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:                "ceph.with_tpm": "0"
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            },
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "type": "block",
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:            "vg_name": "ceph_vg0"
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:        }
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]:    ]
Oct 12 17:22:51 np0005481680 wizardly_almeida[270142]: }
Oct 12 17:22:51 np0005481680 systemd[1]: libpod-f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d.scope: Deactivated successfully.
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.892601372 +0000 UTC m=+0.531084700 container died f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:22:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cd9232771e2da1e91502949a74c389db8fbba54d7576cfcd634880dffe9eab45-merged.mount: Deactivated successfully.
Oct 12 17:22:51 np0005481680 podman[270126]: 2025-10-12 21:22:51.947598261 +0000 UTC m=+0.586081579 container remove f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 12 17:22:51 np0005481680 systemd[1]: libpod-conmon-f3d8f1c75e616d5660d1fb93e1b3960884920c53078b6bf9fd9015f8a9b78a5d.scope: Deactivated successfully.
Oct 12 17:22:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:52] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:22:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:22:52] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:22:52 np0005481680 podman[270216]: 2025-10-12 21:22:52.312900284 +0000 UTC m=+0.086040363 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 12 17:22:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.760342823 +0000 UTC m=+0.063349253 container create 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:22:52 np0005481680 systemd[1]: Started libpod-conmon-87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907.scope.
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.726413774 +0000 UTC m=+0.029420244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.86645621 +0000 UTC m=+0.169462700 container init 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.877973305 +0000 UTC m=+0.180979735 container start 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.883014044 +0000 UTC m=+0.186020534 container attach 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 17:22:52 np0005481680 vigilant_kilby[270295]: 167 167
Oct 12 17:22:52 np0005481680 systemd[1]: libpod-87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907.scope: Deactivated successfully.
Oct 12 17:22:52 np0005481680 conmon[270295]: conmon 87bcdd1e2c5c9401a7d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907.scope/container/memory.events
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.887461418 +0000 UTC m=+0.190467888 container died 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:22:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:22:52 np0005481680 systemd[1]: var-lib-containers-storage-overlay-62b0a97d8d74686c1f75f8045e9620c68a6bb6078b9e6af7918f71c513fa3100-merged.mount: Deactivated successfully.
Oct 12 17:22:52 np0005481680 podman[270279]: 2025-10-12 21:22:52.940959698 +0000 UTC m=+0.243966138 container remove 87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:22:52 np0005481680 systemd[1]: libpod-conmon-87bcdd1e2c5c9401a7d0b3d420790bcbaf8c06abc1b54c81cbe7ae1572337907.scope: Deactivated successfully.
Oct 12 17:22:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:53 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:53 np0005481680 podman[270321]: 2025-10-12 21:22:53.186152826 +0000 UTC m=+0.066281598 container create 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:22:53 np0005481680 systemd[1]: Started libpod-conmon-4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96.scope.
Oct 12 17:22:53 np0005481680 podman[270321]: 2025-10-12 21:22:53.16010747 +0000 UTC m=+0.040236282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:22:53 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:22:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5ca37d825e42719ed815a1942cb8239eab32e372218fa001d0bd29f55161f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5ca37d825e42719ed815a1942cb8239eab32e372218fa001d0bd29f55161f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5ca37d825e42719ed815a1942cb8239eab32e372218fa001d0bd29f55161f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:53 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5ca37d825e42719ed815a1942cb8239eab32e372218fa001d0bd29f55161f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:22:53 np0005481680 podman[270321]: 2025-10-12 21:22:53.291807982 +0000 UTC m=+0.171936754 container init 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:22:53 np0005481680 podman[270321]: 2025-10-12 21:22:53.30736643 +0000 UTC m=+0.187495202 container start 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:22:53 np0005481680 podman[270321]: 2025-10-12 21:22:53.312136912 +0000 UTC m=+0.192265724 container attach 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:22:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 12 17:22:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 12 17:22:53 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 12 17:22:54 np0005481680 lvm[270415]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:22:54 np0005481680 lvm[270415]: VG ceph_vg0 finished
Oct 12 17:22:54 np0005481680 recursing_germain[270339]: {}
Oct 12 17:22:54 np0005481680 systemd[1]: libpod-4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96.scope: Deactivated successfully.
Oct 12 17:22:54 np0005481680 systemd[1]: libpod-4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96.scope: Consumed 1.668s CPU time.
Oct 12 17:22:54 np0005481680 podman[270418]: 2025-10-12 21:22:54.272858173 +0000 UTC m=+0.030487512 container died 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:22:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7c5ca37d825e42719ed815a1942cb8239eab32e372218fa001d0bd29f55161f5-merged.mount: Deactivated successfully.
Oct 12 17:22:54 np0005481680 podman[270418]: 2025-10-12 21:22:54.33093392 +0000 UTC m=+0.088563199 container remove 4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_germain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:22:54 np0005481680 systemd[1]: libpod-conmon-4329ee88adac76d842ddacd5d1954de4d97d43b43c4a51ebeeedd7f7ef0b7f96.scope: Deactivated successfully.
Oct 12 17:22:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 12 17:22:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 12 17:22:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:54 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 12 17:22:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Oct 12 17:22:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:55 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:22:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 12 17:22:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 12 17:22:55 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 12 17:22:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:22:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 682 B/s wr, 1 op/s
Oct 12 17:22:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:22:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:22:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:22:57.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:22:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:57 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 12 17:22:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 12 17:22:57 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 12 17:22:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212257 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:22:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:22:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:22:58.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:22:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:22:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 561 B/s rd, 748 B/s wr, 2 op/s
Oct 12 17:22:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:22:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:22:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:22:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:22:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:22:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 6.4 MiB/s wr, 59 op/s
Oct 12 17:23:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 12 17:23:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 12 17:23:01 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 12 17:23:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:01 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:02] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:23:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:02] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Oct 12 17:23:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 49 op/s
Oct 12 17:23:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:03 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:23:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:23:04 np0005481680 podman[270495]: 2025-10-12 21:23:04.136277172 +0000 UTC m=+0.090977890 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 12 17:23:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212304 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:23:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 12 17:23:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:23:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:06.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s
Oct 12 17:23:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:23:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:23:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:07.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:23:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:07 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:08.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:23:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:23:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Oct 12 17:23:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:09 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:23:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:23:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98003e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Oct 12 17:23:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:11.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:11 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:12] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Oct 12 17:23:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:12] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Oct 12 17:23:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 433 B/s wr, 1 op/s
Oct 12 17:23:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:23:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:14.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 12 17:23:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:15 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:16 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:16.256 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:23:16 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:16.258 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:23:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:23:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:23:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:16.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:23:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:23:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:17.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:17.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:23:17 np0005481680 podman[270527]: 2025-10-12 21:23:17.165876006 +0000 UTC m=+0.119285575 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 12 17:23:17 np0005481680 podman[270528]: 2025-10-12 21:23:17.185297794 +0000 UTC m=+0.133098509 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 12 17:23:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:17 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:23:18
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data']
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:23:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:23:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:23:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:18.358 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:18.359 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:18.359 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:23:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 12 17:23:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:19.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:23:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:23:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212319 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:23:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:23:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:23:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 12 17:23:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:21.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:21 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:22] "GET /metrics HTTP/1.1" 200 48317 "" "Prometheus/2.51.0"
Oct 12 17:23:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:22] "GET /metrics HTTP/1.1" 200 48317 "" "Prometheus/2.51.0"
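Prometheus is scraping the mgr's prometheus module roughly every ten seconds (21:23:22 here, 21:23:32 below), getting a 48317-byte exposition each time. The same scrape by hand, assuming the module's default port 9283, which does not appear in these lines:

    from urllib.request import urlopen

    # Port 9283 is the mgr prometheus module default and an assumption here.
    body = urlopen("http://np0005481680:9283/metrics", timeout=5).read()
    print(len(body))  # the scrapes above returned 48317 bytes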
Oct 12 17:23:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:23:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 12 17:23:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:23 np0005481680 podman[270603]: 2025-10-12 21:23:23.143290527 +0000 UTC m=+0.098558526 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 12 17:23:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:23 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:23 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:23.260 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:23:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212324 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:23:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 12 17:23:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:25 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:23:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:27.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:23:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:27.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
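Both ceph-dashboard webhook receivers are failing: the dial to 192.168.122.101:8443 times out and the retries to compute-1 and compute-2 are cancelled with context deadline exceeded, so the alert notification is dropped. The receiver URL is recorded verbatim, so the failure can be reproduced directly from this node (sketch):

    import urllib.request, urllib.error

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST",
                                 headers={"Content-Type": "application/json"})
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)  # matches the dial tcp ... i/o timeout above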
Oct 12 17:23:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:27 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:23:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:29.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.682 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.684 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.684 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 12 17:23:29 np0005481680 nova_compute[264665]: 2025-10-12 21:23:29.696 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
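These "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task runner, which collects methods registered with the periodic_task decorator and invokes them on their configured spacing. A minimal sketch of that registration pattern, with hypothetical class and method names (ComputeManager uses the same decorator internally):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # invoked roughly every 60 s
        def _cleanup(self, context):
            # each invocation produces a "Running periodic task ..." DEBUG line
            pass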
Oct 12 17:23:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 12 17:23:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:31 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:31 np0005481680 nova_compute[264665]: 2025-10-12 21:23:31.707 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:31 np0005481680 nova_compute[264665]: 2025-10-12 21:23:31.708 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:32] "GET /metrics HTTP/1.1" 200 48317 "" "Prometheus/2.51.0"
Oct 12 17:23:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:32] "GET /metrics HTTP/1.1" 200 48317 "" "Prometheus/2.51.0"
Oct 12 17:23:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.689 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.689 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:32 np0005481680 nova_compute[264665]: 2025-10-12 21:23:32.690 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:23:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:33 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:23:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.692 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.693 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.693 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.693 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:23:33 np0005481680 nova_compute[264665]: 2025-10-12 21:23:33.694 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:23:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:23:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309691871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.198 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
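For the resource audit, nova shells out to ceph df (the exact command and its 0.5 s runtime are logged above) and derives free disk from the JSON. Rerunning the same call standalone, assuming the stats layout of current ceph df JSON output:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]          # cluster-wide totals
    print(stats["total_avail_bytes"] / 2**30, "GiB available")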
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.404 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.406 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.406 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.407 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:34.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.548 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.548 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.829 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing inventories for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.847 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating ProviderTree inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.847 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
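Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. Working through the values logged here:

    # Effective capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1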
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.861 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing aggregate associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.880 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing trait associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 12 17:23:34 np0005481680 nova_compute[264665]: 2025-10-12 21:23:34.895 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:23:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 12 17:23:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:35 np0005481680 podman[270659]: 2025-10-12 21:23:35.158769853 +0000 UTC m=+0.115655112 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:23:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:23:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878426335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.361 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.366 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.381 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.383 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.383 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.486 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.487 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.508 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.640 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.641 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.648 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.648 2 INFO nova.compute.claims [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Claim successful on node compute-0.ctlplane.example.com
Oct 12 17:23:35 np0005481680 nova_compute[264665]: 2025-10-12 21:23:35.760 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:23:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:23:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724580724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.234 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.244 2 DEBUG nova.compute.provider_tree [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.263 2 DEBUG nova.scheduler.client.report [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.295 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.297 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.353 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.353 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.379 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.381 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.382 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.383 2 INFO nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.404 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 12 17:23:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:36.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.504 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.506 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.507 2 INFO nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Creating image(s)
Oct 12 17:23:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.547 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.584 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.613 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.617 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.618 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:23:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:23:36 np0005481680 nova_compute[264665]: 2025-10-12 21:23:36.957 2 DEBUG nova.virt.libvirt.imagebackend [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image locations are: [{'url': 'rbd://5adb8c35-1b74-5730-a252-62321f654cd5/images/0838cede-7f25-4ac2-ae16-04e86e2d6b46/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://5adb8c35-1b74-5730-a252-62321f654cd5/images/0838cede-7f25-4ac2-ae16-04e86e2d6b46/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
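The clone source is expressed as an rbd:// URL carrying the cluster fsid, pool, image id and snapshot name; nova's RBD image backend clones the Glance image from that snapshot instead of downloading it. Pulling the logged URL apart (sketch):

    from urllib.parse import urlparse

    url = ("rbd://5adb8c35-1b74-5730-a252-62321f654cd5"
           "/images/0838cede-7f25-4ac2-ae16-04e86e2d6b46/snap")
    parts = urlparse(url)
    fsid = parts.netloc                                   # cluster fsid
    pool, image, snap = parts.path.lstrip("/").split("/")
    print(fsid, pool, image, snap)  # ... images 0838cede-... snap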
Oct 12 17:23:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:37.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:23:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:37.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:23:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:37 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:37 np0005481680 nova_compute[264665]: 2025-10-12 21:23:37.625 2 WARNING oslo_policy.policy [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 12 17:23:37 np0005481680 nova_compute[264665]: 2025-10-12 21:23:37.625 2 WARNING oslo_policy.policy [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
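The warning above names its own fix. A sketch of invoking the recommended converter; the flags follow the oslo.policy documentation, and the file paths and namespace here are placeholders, not values taken from this host:

    # Sketch: convert a legacy JSON policy file to the YAML format that
    # oslo.policy now expects. Paths below are illustrative.
    import subprocess

    subprocess.run(
        ["oslopolicy-convert-json-to-yaml",
         "--namespace", "nova",
         "--policy-file", "/etc/nova/policy.json",
         "--output-file", "/etc/nova/policy.yaml"],
        check=True,
    )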
Oct 12 17:23:37 np0005481680 nova_compute[264665]: 2025-10-12 21:23:37.631 2 DEBUG nova.policy [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 12 17:23:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:38.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:38 np0005481680 nova_compute[264665]: 2025-10-12 21:23:38.673 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:38 np0005481680 nova_compute[264665]: 2025-10-12 21:23:38.762 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.part --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:38 np0005481680 nova_compute[264665]: 2025-10-12 21:23:38.764 2 DEBUG nova.virt.images [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] 0838cede-7f25-4ac2-ae16-04e86e2d6b46 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 12 17:23:38 np0005481680 nova_compute[264665]: 2025-10-12 21:23:38.766 2 DEBUG nova.privsep.utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 12 17:23:38 np0005481680 nova_compute[264665]: 2025-10-12 21:23:38.767 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.part /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.034 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.part /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.converted" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
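The processutils lines above are nova.virt.images.fetch_to_raw at work: probe the downloaded .part file with qemu-img info (run under oslo_concurrency.prlimit, capping address space at 1 GiB and CPU time at 30 s so a hostile image cannot wedge the probe), then, since the file reports qcow2, convert it to raw with host caching disabled. A condensed re-creation of those two calls; the limits and flags are copied from the log and error handling is omitted:

    # Sketch of the info-then-convert sequence logged above.
    import json, subprocess

    def fetch_to_raw(path):
        out = subprocess.check_output(
            ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
             "--as=1073741824", "--cpu=30", "--",
             "env", "LC_ALL=C", "LANG=C",
             "qemu-img", "info", path, "--force-share", "--output=json"])
        if json.loads(out)["format"] == "qcow2":
            subprocess.check_call(
                ["qemu-img", "convert", "-t", "none", "-O", "raw",
                 "-f", "qcow2", path, path.replace(".part", ".converted")])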
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.040 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.125 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d.converted --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.128 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.178 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.183 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:39 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:39 np0005481680 nova_compute[264665]: 2025-10-12 21:23:39.472 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Successfully created port: 7087c316-8bc6-4ae4-a39d-10fad6139d2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 12 17:23:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 12 17:23:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 12 17:23:40 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 12 17:23:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Oct 12 17:23:40 np0005481680 nova_compute[264665]: 2025-10-12 21:23:40.974 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Successfully updated port: 7087c316-8bc6-4ae4-a39d-10fad6139d2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 12 17:23:40 np0005481680 nova_compute[264665]: 2025-10-12 21:23:40.994 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:23:40 np0005481680 nova_compute[264665]: 2025-10-12 21:23:40.995 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:23:40 np0005481680 nova_compute[264665]: 2025-10-12 21:23:40.995 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:23:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 12 17:23:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 12 17:23:41 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 12 17:23:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.393 2 DEBUG nova.compute.manager [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.393 2 DEBUG nova.compute.manager [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing instance network info cache due to event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.393 2 DEBUG oslo_concurrency.lockutils [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.416 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.502 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
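With the raw base file ready, the lines above import it into the Ceph "vms" pool as the instance's root disk and then grow it to the flavor's 1 GiB root_gb. A sketch pairing the logged CLI import with a resize through the standard python-rbd bindings; the client name and conf path follow the log, and the rados/rbd calls are the upstream API, not Nova's wrapper:

    # Sketch: rbd import (as logged) followed by a resize via python-rbd.
    import subprocess
    import rados, rbd

    subprocess.check_call(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d",
         "272e54e6-8c70-4d93-838c-b6511e1a9a61_disk", "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "272e54e6-8c70-4d93-838c-b6511e1a9a61_disk") as image:
                image.resize(1073741824)  # 1 GiB, matching the resize line above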
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.584 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.652 2 DEBUG nova.objects.instance [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid 272e54e6-8c70-4d93-838c-b6511e1a9a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.676 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.676 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Ensure instance console log exists: /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.677 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.677 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:41 np0005481680 nova_compute[264665]: 2025-10-12 21:23:41.678 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:42] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Oct 12 17:23:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:42] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Oct 12 17:23:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.766 2 DEBUG nova.network.neutron [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
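The network_info blob above is what gets written into the instance's info cache; the MAC, tap device, MTU, and fixed IPs that the guest XML needs later are all nested inside it. A small reader over an abbreviated copy of that structure:

    # Sketch: extract guest-facing fields from a cached network_info
    # entry (abbreviated from the record logged above).
    vif = {
        "id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b",
        "address": "fa:16:3e:17:aa:84",
        "devname": "tap7087c316-8b",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.9"}]}],
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["address"], vif["devname"],
          vif["network"]["meta"]["mtu"], fixed_ips)
    # fa:16:3e:17:aa:84 tap7087c316-8b 1442 ['10.100.0.9']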
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.802 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.802 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Instance network_info: |[{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.803 2 DEBUG oslo_concurrency.lockutils [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.803 2 DEBUG nova.network.neutron [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.805 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Start _get_guest_xml network_info=[{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.811 2 WARNING nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.816 2 DEBUG nova.virt.libvirt.host [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.816 2 DEBUG nova.virt.libvirt.host [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.820 2 DEBUG nova.virt.libvirt.host [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.820 2 DEBUG nova.virt.libvirt.host [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
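The four host.py lines above are Nova probing for a usable CPU controller, first through cgroup v1 (missing here) and then through cgroup v2 (found). On the unified hierarchy that check reduces to reading one file; a sketch, assuming the standard /sys/fs/cgroup mount:

    # Sketch of the cgroup v2 probe: the unified hierarchy lists its
    # enabled controllers in a single space-separated file.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        f = Path(root, "cgroup.controllers")
        return f.exists() and "cpu" in f.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log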
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.821 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.821 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.821 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.821 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.822 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.822 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.822 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.822 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.822 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.823 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.823 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.823 2 DEBUG nova.virt.hardware [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
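The hardware.py lines above enumerate every sockets:cores:threads factorization of the flavor's single vCPU under the default 65536-per-dimension limits; for vcpus=1 there is exactly one, 1:1:1, which is what lands in the <topology> element of the guest XML below. A toy version of that enumeration (Nova's real ordering and preference logic is more involved):

    # Sketch: list (sockets, cores, threads) triples whose product is
    # the vCPU count, bounded per dimension as in the log's limits.
    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the one topology logged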
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.826 2 DEBUG nova.privsep.utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 12 17:23:42 np0005481680 nova_compute[264665]: 2025-10-12 21:23:42.827 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Oct 12 17:23:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:43.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:43 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:23:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598518843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.347 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
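The "ceph mon dump --format=json" round trips above (dispatched by ceph-mon as client.openstack) are how Nova learns the monitor addresses that reappear as <host> elements in the disk sources of the guest XML below. A sketch of the call and the parse; the "mons"/"public_addr" keys follow the usual monmap JSON layout and should be treated as an assumption:

    # Sketch: dump the monmap as JSON and pull out monitor IPs.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]
    # public_addr looks like "192.168.122.100:6789/0"
    hosts = [m["public_addr"].split("/")[0].rsplit(":", 1)[0] for m in mons]
    print(hosts)  # e.g. ['192.168.122.100', '192.168.122.102', '192.168.122.101']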
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.390 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.396 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:23:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691031192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.905 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.908 2 DEBUG nova.virt.libvirt.vif [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1130035828',display_name='tempest-TestNetworkBasicOps-server-1130035828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1130035828',id=1,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaoxoysQrc+7voGJH9+95zvBEIx8T8j27vK54pA8C5IkKm6egwZlxQ/RFTI5+QcGyvz5wcpnBScK+cserfjr2xL4tIWlrufZ6VInpDPrirN0ndQueVA6v2+Zc1DF6Zdeg==',key_name='tempest-TestNetworkBasicOps-1601546160',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-ddym0l0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:23:36Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=272e54e6-8c70-4d93-838c-b6511e1a9a61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.908 2 DEBUG nova.network.os_vif_util [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.909 2 DEBUG nova.network.os_vif_util [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.911 2 DEBUG nova.objects.instance [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 272e54e6-8c70-4d93-838c-b6511e1a9a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.928 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <uuid>272e54e6-8c70-4d93-838c-b6511e1a9a61</uuid>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <name>instance-00000001</name>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-1130035828</nova:name>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:23:42</nova:creationTime>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <nova:port uuid="7087c316-8bc6-4ae4-a39d-10fad6139d2b">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="serial">272e54e6-8c70-4d93-838c-b6511e1a9a61</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="uuid">272e54e6-8c70-4d93-838c-b6511e1a9a61</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/272e54e6-8c70-4d93-838c-b6511e1a9a61_disk">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:17:aa:84"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <target dev="tap7087c316-8b"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/console.log" append="off"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:23:43 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:23:43 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:23:43 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:23:43 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
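The block above is the complete libvirt domain XML that _get_guest_xml produced for instance 272e54e6. For orientation, here is a minimal, illustrative sketch of how such an XML document is turned into a running guest with the libvirt Python bindings; Nova's driver goes through its own Guest wrapper rather than calling libvirt this directly, and the XML below is deliberately abbreviated.

    import libvirt

    # Abbreviated stand-in for the domain XML logged above.
    DOMAIN_XML = """<domain type='kvm'>
      <name>instance-00000001</name>
      <memory unit='MiB'>128</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
      <devices>
        <!-- disks, the tap7087c316-8b interface, serial console, rng, ... -->
      </devices>
    </domain>"""

    conn = libvirt.open('qemu:///system')   # local system hypervisor
    try:
        dom = conn.defineXML(DOMAIN_XML)    # persist the definition
        dom.create()                        # boot the guest ('virsh start')
    finally:
        conn.close()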
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.929 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Preparing to wait for external event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.930 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.930 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.930 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
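The three lockutils lines above implement the "register before you plug" half of Nova's external-event handshake: the waiter for network-vif-plugged is created under the instance's "-events" lock before the VIF is plugged, so the event Neutron sends later cannot be missed. A hedged sketch of the pattern (class and method names are illustrative, not Nova's actual code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}            # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:             # the "-events" lock seen in the log
                return self._events.setdefault(
                    (instance_uuid, event_name), threading.Event())

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._events.pop((instance_uuid, event_name), None)

    events = InstanceEvents()
    waiter = events.prepare('272e54e6-8c70-4d93-838c-b6511e1a9a61',
                            'network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b')
    # ... plug the VIF, then block until the event arrives:
    #     waiter.wait(timeout=300)
    # The thread that receives the external event looks the waiter up with
    # pop() and calls .set() on it.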
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.931 2 DEBUG nova.virt.libvirt.vif [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1130035828',display_name='tempest-TestNetworkBasicOps-server-1130035828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1130035828',id=1,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaoxoysQrc+7voGJH9+95zvBEIx8T8j27vK54pA8C5IkKm6egwZlxQ/RFTI5+QcGyvz5wcpnBScK+cserfjr2xL4tIWlrufZ6VInpDPrirN0ndQueVA6v2+Zc1DF6Zdeg==',key_name='tempest-TestNetworkBasicOps-1601546160',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-ddym0l0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:23:36Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=272e54e6-8c70-4d93-838c-b6511e1a9a61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.931 2 DEBUG nova.network.os_vif_util [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.932 2 DEBUG nova.network.os_vif_util [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.932 2 DEBUG os_vif [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
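The "Plugging vif VIFOpenVSwitch(...)" record corresponds roughly to this use of the os-vif library API. A hedged sketch, with field values copied from the logged object; the instance name is assumed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # registers objects and loads the 'ovs' plugin

    port = vif.VIFOpenVSwitch(
        id='7087c316-8bc6-4ae4-a39d-10fad6139d2b',
        address='fa:16:3e:17:aa:84',
        vif_name='tap7087c316-8b',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='eb8e0c26-7a4c-492b-92e7-613512ada910',
                                bridge='br-int'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='7087c316-8bc6-4ae4-a39d-10fad6139d2b'))
    info = instance_info.InstanceInfo(
        uuid='272e54e6-8c70-4d93-838c-b6511e1a9a61',
        name='instance-00000001')

    os_vif.plug(port, info)   # dispatches to the plugin named in port.plugin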
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.996 2 DEBUG ovsdbapp.backend.ovs_idl [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.996 2 DEBUG ovsdbapp.backend.ovs_idl [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.997 2 DEBUG ovsdbapp.backend.ovs_idl [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 12 17:23:43 np0005481680 nova_compute[264665]: 2025-10-12 21:23:43.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
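The AddBridgeCommand transaction above is a no-op here because br-int already exists, hence "Transaction caused no change"; it is issued through ovsdbapp against the local ovsdb-server on tcp:127.0.0.1:6640. A hedged sketch of the equivalent direct ovsdbapp usage:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the endpoint seen in the reconnect log lines above.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # may_exist=True makes the command idempotent, matching the logged txn.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))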
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.015 2 INFO oslo.privsep.daemon [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpaoaxwb7w/privsep.sock']#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.201 2 DEBUG nova.network.neutron [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updated VIF entry in instance network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.202 2 DEBUG nova.network.neutron [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.224 2 DEBUG oslo_concurrency.lockutils [req-b2ca889c-d84f-464f-a530-dc6a5fa49f82 req-1028da10-f457-4fec-8c8b-4bcc7d064427 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:23:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 88 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.994 2 INFO oslo.privsep.daemon [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.878 563 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.884 563 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.888 563 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 12 17:23:44 np0005481680 nova_compute[264665]: 2025-10-12 21:23:44.888 563 INFO oslo.privsep.daemon [-] privsep daemon running as pid 563#033[00m
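The four privsep lines above show the helper process coming up: spawned via sudo and nova-rootwrap, running as uid/gid 0/0 but restricted to the CAP_DAC_OVERRIDE|CAP_NET_ADMIN capability set declared by the vif_plug_ovs.privsep.vif_plug context. A sketch of how such a context is declared with oslo.privsep; the entrypoint function here is illustrative, the real ones live in the vif_plug_ovs package:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',          # import path of this context
        capabilities=[caps.CAP_NET_ADMIN,       # matches the logged eff/prm set
                      caps.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def set_device_mtu(device, mtu):
        # Body runs inside the privsep daemon, reached over the unix socket
        # passed via --privsep_sock_path in the helper command line above.
        pass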
Oct 12 17:23:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7087c316-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7087c316-8b, col_values=(('external_ids', {'iface-id': '7087c316-8bc6-4ae4-a39d-10fad6139d2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:aa:84', 'vm-uuid': '272e54e6-8c70-4d93-838c-b6511e1a9a61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:45 np0005481680 NetworkManager[44859]: <info>  [1760304225.3229] manager: (tap7087c316-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.333 2 INFO os_vif [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b')#033[00m
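The two-command transaction above (AddPortCommand plus DbSetCommand) is what actually wires the guest into OVN: the tap device joins br-int, and the external_ids written on its Interface row carry the Neutron port UUID as iface-id, which ovn-controller matches a moment later when it logs "Claiming lport". Continuing the ovsdbapp handle ('api') from the earlier sketch:

    external_ids = {
        'iface-id': '7087c316-8bc6-4ae4-a39d-10fad6139d2b',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:17:aa:84',
        'vm-uuid': '272e54e6-8c70-4d93-838c-b6511e1a9a61',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap7087c316-8b', may_exist=True))
        txn.add(api.db_set('Interface', 'tap7087c316-8b',
                           ('external_ids', external_ids)))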
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.389 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.389 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.389 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:17:aa:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.390 2 INFO nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Using config drive#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.431 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.874 2 INFO nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Creating config drive at /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config#033[00m
Oct 12 17:23:45 np0005481680 nova_compute[264665]: 2025-10-12 21:23:45.883 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_9ui14 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.034 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_9ui14" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.075 2 DEBUG nova.storage.rbd_utils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.079 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:23:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 12 17:23:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 12 17:23:46 np0005481680 ceph-mon[73608]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.287 2 DEBUG oslo_concurrency.processutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config 272e54e6-8c70-4d93-838c-b6511e1a9a61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.288 2 INFO nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Deleting local config drive /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61/disk.config because it was imported into RBD.#033[00m
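The config-drive sequence above reduces to three steps: build an ISO9660 image with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config, then delete the local file. A hedged sketch using oslo.concurrency's processutils, with paths taken from the log; the staging directory stands in for the generated /tmp/tmpn0_9ui14 tempdir, and error handling is omitted:

    import os
    from oslo_concurrency import processutils

    inst = '272e54e6-8c70-4d93-838c-b6511e1a9a61'
    iso = '/var/lib/nova/instances/%s/disk.config' % inst
    staging_dir = '/tmp/tmpn0_9ui14'   # tempdir holding the metadata files

    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2', staging_dir)
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, '%s_disk.config' % inst,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.remove(iso)   # "Deleting local config drive ... imported into RBD"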
Oct 12 17:23:46 np0005481680 systemd[1]: Starting libvirt secret daemon...
Oct 12 17:23:46 np0005481680 systemd[1]: Started libvirt secret daemon.
Oct 12 17:23:46 np0005481680 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 12 17:23:46 np0005481680 kernel: tap7087c316-8b: entered promiscuous mode
Oct 12 17:23:46 np0005481680 NetworkManager[44859]: <info>  [1760304226.4475] manager: (tap7087c316-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 12 17:23:46 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:46Z|00027|binding|INFO|Claiming lport 7087c316-8bc6-4ae4-a39d-10fad6139d2b for this chassis.
Oct 12 17:23:46 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:46Z|00028|binding|INFO|7087c316-8bc6-4ae4-a39d-10fad6139d2b: Claiming fa:16:3e:17:aa:84 10.100.0.9
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:46 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:46.475 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:aa:84 10.100.0.9'], port_security=['fa:16:3e:17:aa:84 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '272e54e6-8c70-4d93-838c-b6511e1a9a61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb8e0c26-7a4c-492b-92e7-613512ada910', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45ae961f-5a05-4a7d-be11-726aef1ceda0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=218bc91e-511f-4a31-8fe3-010bc033ff95, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=7087c316-8bc6-4ae4-a39d-10fad6139d2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:23:46 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:46.477 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 7087c316-8bc6-4ae4-a39d-10fad6139d2b in datapath eb8e0c26-7a4c-492b-92e7-613512ada910 bound to our chassis#033[00m
Oct 12 17:23:46 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:46.479 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb8e0c26-7a4c-492b-92e7-613512ada910#033[00m
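The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery at work: the metadata agent watches the southbound Port_Binding table and reacts when a port acquires a chassis. A sketch of the mechanism; the class name comes from the log, but this body is illustrative, not neutron's implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to any Port_Binding row.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' held an empty chassis column; 'row' now names ours, so
            # provision metadata for the port's datapath, as logged above.
            print('port %s bound to our chassis' % row.logical_port)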
Oct 12 17:23:46 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:46.480 164459 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpjfec9gc7/privsep.sock']#033[00m
Oct 12 17:23:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:46 np0005481680 systemd-udevd[271098]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:23:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:46 np0005481680 NetworkManager[44859]: <info>  [1760304226.5173] device (tap7087c316-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:23:46 np0005481680 NetworkManager[44859]: <info>  [1760304226.5185] device (tap7087c316-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:23:46 np0005481680 systemd-machined[218338]: New machine qemu-1-instance-00000001.
Oct 12 17:23:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:46 np0005481680 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:46 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:46Z|00029|binding|INFO|Setting lport 7087c316-8bc6-4ae4-a39d-10fad6139d2b ovn-installed in OVS
Oct 12 17:23:46 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:46Z|00030|binding|INFO|Setting lport 7087c316-8bc6-4ae4-a39d-10fad6139d2b up in Southbound
Oct 12 17:23:46 np0005481680 nova_compute[264665]: 2025-10-12 21:23:46.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 88 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 MiB/s wr, 47 op/s
Oct 12 17:23:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:47.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:47.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:23:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:47.158Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:23:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:47.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:23:47 np0005481680 nova_compute[264665]: 2025-10-12 21:23:47.168 2 DEBUG nova.compute.manager [req-2ddea007-66e1-49dc-95b3-7b3d61bc1477 req-989e5bbf-2003-4284-8d92-2b81be2b422b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:23:47 np0005481680 nova_compute[264665]: 2025-10-12 21:23:47.169 2 DEBUG oslo_concurrency.lockutils [req-2ddea007-66e1-49dc-95b3-7b3d61bc1477 req-989e5bbf-2003-4284-8d92-2b81be2b422b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:47 np0005481680 nova_compute[264665]: 2025-10-12 21:23:47.170 2 DEBUG oslo_concurrency.lockutils [req-2ddea007-66e1-49dc-95b3-7b3d61bc1477 req-989e5bbf-2003-4284-8d92-2b81be2b422b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:47 np0005481680 nova_compute[264665]: 2025-10-12 21:23:47.171 2 DEBUG oslo_concurrency.lockutils [req-2ddea007-66e1-49dc-95b3-7b3d61bc1477 req-989e5bbf-2003-4284-8d92-2b81be2b422b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:47 np0005481680 nova_compute[264665]: 2025-10-12 21:23:47.171 2 DEBUG nova.compute.manager [req-2ddea007-66e1-49dc-95b3-7b3d61bc1477 req-989e5bbf-2003-4284-8d92-2b81be2b422b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Processing event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 12 17:23:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:47 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.325 164459 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.326 164459 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpjfec9gc7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.196 271121 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.202 271121 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.206 271121 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.206 271121 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271121#033[00m
Oct 12 17:23:47 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:47.331 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[6442af35-53ec-42ce-935b-6ef9d67e0502]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.023 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.028 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304228.024705, 272e54e6-8c70-4d93-838c-b6511e1a9a61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.029 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] VM Started (Lifecycle Event)#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.033 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.038 2 INFO nova.virt.libvirt.driver [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Instance spawned successfully.#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.039 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.065 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.069 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.100 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.100 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304228.024805, 272e54e6-8c70-4d93-838c-b6511e1a9a61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.100 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.121 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.128 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304228.032247, 272e54e6-8c70-4d93-838c-b6511e1a9a61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.128 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.132 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.133 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.133 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.134 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.135 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 podman[271162]: 2025-10-12 21:23:48.135312077 +0000 UTC m=+0.098899123 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.135 2 DEBUG nova.virt.libvirt.driver [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.147 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.151 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:23:48 np0005481680 podman[271163]: 2025-10-12 21:23:48.155129456 +0000 UTC m=+0.119953694 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.201 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.238 2 INFO nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Took 11.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.239 2 DEBUG nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.332 2 INFO nova.compute.manager [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Took 12.75 seconds to build instance.#033[00m
Oct 12 17:23:48 np0005481680 nova_compute[264665]: 2025-10-12 21:23:48.365 2 DEBUG oslo_concurrency.lockutils [None req-856269c2-9c34-4316-bf77-ddc23b82b66f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:23:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:48.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1060547655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:23:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1060547655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:23:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:48 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:48.593 271121 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:48 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:48.594 271121 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:48 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:48.594 271121 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
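The Acquiring/acquired/released triplet above (it appears again at 21:23:51 from the freshly spawned privsep daemon) is oslo.concurrency's standard trace around a named in-process lock; the 'inner' frames in lockutils.py are its wrapper. A minimal sketch that emits the same three DEBUG lines, assuming oslo.concurrency is installed; the decorated function is illustrative:

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized('context-manager')
    def create_context_manager():
        # Runs with the named lock held; the hold time is logged on release.
        pass

    create_context_manager()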
Oct 12 17:23:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 88 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Oct 12 17:23:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.246 2 DEBUG nova.compute.manager [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.246 2 DEBUG oslo_concurrency.lockutils [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.247 2 DEBUG oslo_concurrency.lockutils [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.247 2 DEBUG oslo_concurrency.lockutils [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.247 2 DEBUG nova.compute.manager [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] No waiting events found dispatching network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.247 2 WARNING nova.compute.manager [req-e8fefd68-abe7-49e7-bff7-be99814e13c6 req-84750693-8a41-4474-8994-b97789889e27 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received unexpected event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b for instance with vm_state active and task_state None.#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.819 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[22829c70-7aa9-40a7-8449-ed87a5196405]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.821 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb8e0c26-71 in ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.823 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb8e0c26-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.824 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9b0d59-42d7-4b24-9404-4c65c94d9c83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.829 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[3191c561-1728-449c-8b7f-5058e9cff9b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:49 np0005481680 nova_compute[264665]: 2025-10-12 21:23:49.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.869 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[17af070c-e2a1-436b-b6b3-e59543ddacb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.924 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[a65d38e7-ad6b-416f-a409-e8d0d2d451ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:49 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:49.927 164459 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpm1dstogm/privsep.sock']#033[00m
Oct 12 17:23:50 np0005481680 nova_compute[264665]: 2025-10-12 21:23:50.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:23:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:50.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:23:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.581 164459 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.582 164459 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm1dstogm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.470 271215 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.474 271215 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.476 271215 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.477 271215 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271215#033[00m
Oct 12 17:23:50 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:50.586 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[c601f41f-3bd7-40ea-b950-0aadb0555a3f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.076 271215 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.076 271215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.076 271215 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:23:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:51.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:51 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.785 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[f99920af-910f-4f5d-b1ad-cbe23768fe28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 NetworkManager[44859]: <info>  [1760304231.7944] manager: (tapeb8e0c26-70): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.799 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[84bf047c-ef1e-4d59-a3af-1d0107cde713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 systemd-udevd[271227]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.846 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4d8263-2407-43fe-ab78-355121f9df3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.852 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[83194557-f24b-4162-ac6f-8ed083907c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 NetworkManager[44859]: <info>  [1760304231.8999] device (tapeb8e0c26-70): carrier: link connected
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.920 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0ac23e-f11c-4436-9744-56fa30abca3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.937 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[2f60ef62-4d20-470b-9942-6f8c1b00cc18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb8e0c26-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:27:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389853, 'reachable_time': 44741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271247, 'error': None, 'target': 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.952 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d12f3930-fd8b-4241-8bba-97de2a4f5ea7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:270b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389853, 'tstamp': 389853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271248, 'error': None, 'target': 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:51 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.967 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[68a4b74b-0b87-49b6-877a-8b2206f95f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb8e0c26-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:27:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389853, 'reachable_time': 44741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271249, 'error': None, 'target': 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:51.999 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[14b4d178-19cf-408e-8b14-66493a21686c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:52] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 12 17:23:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:23:52] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.086 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6a5713-2f69-4154-b339-b328bf0e0aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.088 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb8e0c26-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.089 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.089 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb8e0c26-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:23:52 np0005481680 kernel: tapeb8e0c26-70: entered promiscuous mode
Oct 12 17:23:52 np0005481680 NetworkManager[44859]: <info>  [1760304232.0925] manager: (tapeb8e0c26-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 12 17:23:52 np0005481680 nova_compute[264665]: 2025-10-12 21:23:52.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:52 np0005481680 nova_compute[264665]: 2025-10-12 21:23:52.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.099 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb8e0c26-70, col_values=(('external_ids', {'iface-id': 'fc66f074-81fa-4e66-9e8f-de55158f2451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
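Taken together, the three ovsdbapp transactions above remove the metadata tap from br-ex if present (a no-op here), plug it into br-int, and stamp the Interface with the OVN iface-id that lets ovn-controller bind the logical port; the binding messages that follow are its reaction to that change. For comparison, the same sequence expressed with the ovs-vsctl CLI, one command per transaction, with values copied from the log:

    import subprocess

    PORT = 'tapeb8e0c26-70'
    IFACE_ID = 'fc66f074-81fa-4e66-9e8f-de55158f2451'

    for cmd in (
        # DelPortCommand(if_exists=True) against br-ex
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT],
        # AddPortCommand(may_exist=True) against br-int
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT],
        # DbSetCommand on Interface.external_ids
        ['ovs-vsctl', 'set', 'Interface', PORT,
         'external_ids:iface-id=' + IFACE_ID],
    ):
        subprocess.run(cmd, check=True)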
Oct 12 17:23:52 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:52Z|00031|binding|INFO|Releasing lport fc66f074-81fa-4e66-9e8f-de55158f2451 from this chassis (sb_readonly=0)
Oct 12 17:23:52 np0005481680 nova_compute[264665]: 2025-10-12 21:23:52.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:52 np0005481680 nova_compute[264665]: 2025-10-12 21:23:52.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.130 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb8e0c26-7a4c-492b-92e7-613512ada910.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb8e0c26-7a4c-492b-92e7-613512ada910.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.131 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf9c075-9f6b-4fe5-9986-f29b71935785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.132 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-eb8e0c26-7a4c-492b-92e7-613512ada910
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/eb8e0c26-7a4c-492b-92e7-613512ada910.pid.haproxy
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID eb8e0c26-7a4c-492b-92e7-613512ada910
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:23:52 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:23:52.133 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910', 'env', 'PROCESS_TAG=haproxy-eb8e0c26-7a4c-492b-92e7-613512ada910', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb8e0c26-7a4c-492b-92e7-613512ada910.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
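The rendered configuration binds the proxy to 169.254.169.254:80 inside the ovnmeta namespace, forwards to the Neutron metadata UNIX socket, and adds the X-OVN-Network-ID header so the metadata service can identify the originating network. A file like this can be sanity-checked with haproxy's parse-only mode before launch; a sketch, assuming an haproxy binary is available on the host (in this deployment it actually runs inside the neutron-haproxy-ovnmeta container started below):

    import subprocess

    CONF = '/var/lib/neutron/ovn-metadata-proxy/eb8e0c26-7a4c-492b-92e7-613512ada910.conf'

    # "haproxy -c -f FILE" parses and validates the config (exit 0 on success)
    # without binding sockets or daemonizing.
    subprocess.run(['haproxy', '-c', '-f', CONF], check=True)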
Oct 12 17:23:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:52.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:52 np0005481680 podman[271282]: 2025-10-12 21:23:52.609371476 +0000 UTC m=+0.084539244 container create 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 12 17:23:52 np0005481680 podman[271282]: 2025-10-12 21:23:52.566734779 +0000 UTC m=+0.041902647 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:23:52 np0005481680 systemd[1]: Started libpod-conmon-08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7.scope.
Oct 12 17:23:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:23:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8872de8395f5ba3cab9d653d9fce7d31010ca6f5e68c4da037bb492a87a83153/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:52 np0005481680 podman[271282]: 2025-10-12 21:23:52.708273807 +0000 UTC m=+0.183441625 container init 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 12 17:23:52 np0005481680 podman[271282]: 2025-10-12 21:23:52.71889191 +0000 UTC m=+0.194059698 container start 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:23:52 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [NOTICE]   (271302) : New worker (271304) forked
Oct 12 17:23:52 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [NOTICE]   (271302) : Loading success.
Oct 12 17:23:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct 12 17:23:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:53.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:53 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:53 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:53Z|00032|binding|INFO|Releasing lport fc66f074-81fa-4e66-9e8f-de55158f2451 from this chassis (sb_readonly=0)
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6930] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6932] device (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6941] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6943] device (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6950] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6954] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 12 17:23:53 np0005481680 nova_compute[264665]: 2025-10-12 21:23:53.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6957] device (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 12 17:23:53 np0005481680 NetworkManager[44859]: <info>  [1760304233.6976] device (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 12 17:23:53 np0005481680 ovn_controller[154617]: 2025-10-12T21:23:53Z|00033|binding|INFO|Releasing lport fc66f074-81fa-4e66-9e8f-de55158f2451 from this chassis (sb_readonly=0)
Oct 12 17:23:53 np0005481680 nova_compute[264665]: 2025-10-12 21:23:53.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:53 np0005481680 nova_compute[264665]: 2025-10-12 21:23:53.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.080 2 DEBUG nova.compute.manager [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.080 2 DEBUG nova.compute.manager [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing instance network info cache due to event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.081 2 DEBUG oslo_concurrency.lockutils [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.081 2 DEBUG oslo_concurrency.lockutils [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.081 2 DEBUG nova.network.neutron [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:23:54 np0005481680 podman[271316]: 2025-10-12 21:23:54.135489262 +0000 UTC m=+0.098959545 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 12 17:23:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:54 np0005481680 nova_compute[264665]: 2025-10-12 21:23:54.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 12 17:23:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:55.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:55 np0005481680 nova_compute[264665]: 2025-10-12 21:23:55.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:23:55 np0005481680 nova_compute[264665]: 2025-10-12 21:23:55.739 2 DEBUG nova.network.neutron [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updated VIF entry in instance network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:23:55 np0005481680 nova_compute[264665]: 2025-10-12 21:23:55.740 2 DEBUG nova.network.neutron [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
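The cache update above carries the full Neutron view of the port: fixed IP 10.100.0.9 on 10.100.0.0/28 with floating IP 192.168.122.228, MTU 1442, bound by the ovn driver on br-int. A trimmed illustration of walking that structure for the addresses; only the keys used below are kept from the original entry:

    # Trimmed from the network_info entry logged above.
    vif = {
        'id': '7087c316-8bc6-4ae4-a39d-10fad6139d2b',
        'network': {'subnets': [{'ips': [{
            'address': '10.100.0.9',
            'floating_ips': [{'address': '192.168.122.228'}],
        }]}]},
    }

    fixed = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips']]
    floating = [fip['address']
                for subnet in vif['network']['subnets']
                for ip in subnet['ips']
                for fip in ip.get('floating_ips', [])]
    print(fixed, floating)  # ['10.100.0.9'] ['192.168.122.228']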
Oct 12 17:23:55 np0005481680 nova_compute[264665]: 2025-10-12 21:23:55.767 2 DEBUG oslo_concurrency.lockutils [req-d12bedca-7b77-4bca-9562-1d13672dc596 req-a42c7d0b-23bb-46d5-afa8-8ea293f2d47a 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:23:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:23:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:23:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:23:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:23:56 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:23:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:23:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.672150756 +0000 UTC m=+0.069518518 container create 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:23:56 np0005481680 systemd[1]: Started libpod-conmon-18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1.scope.
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.644785902 +0000 UTC m=+0.042153714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:23:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.786405781 +0000 UTC m=+0.183773583 container init 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.798433881 +0000 UTC m=+0.195801623 container start 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.802642539 +0000 UTC m=+0.200010351 container attach 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:23:56 np0005481680 pedantic_wu[271525]: 167 167
Oct 12 17:23:56 np0005481680 systemd[1]: libpod-18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1.scope: Deactivated successfully.
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.808151921 +0000 UTC m=+0.205519673 container died 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:23:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d0b31959fa111b45208a9c94c13253b670063046c9bdfc618acef12a31b968e9-merged.mount: Deactivated successfully.
Oct 12 17:23:56 np0005481680 podman[271508]: 2025-10-12 21:23:56.861961533 +0000 UTC m=+0.259329295 container remove 18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_wu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:23:56 np0005481680 systemd[1]: libpod-conmon-18b5e91a82119b90865c7d21af7313d4ee32ceb8c9d19ec04baee992b91b50b1.scope: Deactivated successfully.
Oct 12 17:23:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Oct 12 17:23:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:57.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.127462966 +0000 UTC m=+0.077228755 container create ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:23:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:23:57.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.094875059 +0000 UTC m=+0.044640908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:23:57 np0005481680 systemd[1]: Started libpod-conmon-ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559.scope.
Oct 12 17:23:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:23:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:57 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.285994779 +0000 UTC m=+0.235760578 container init ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.297950097 +0000 UTC m=+0.247715876 container start ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.302265098 +0000 UTC m=+0.252030927 container attach ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:23:57 np0005481680 optimistic_swartz[271567]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:23:57 np0005481680 optimistic_swartz[271567]: --> All data devices are unavailable
Oct 12 17:23:57 np0005481680 systemd[1]: libpod-ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559.scope: Deactivated successfully.
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.719118869 +0000 UTC m=+0.668884648 container died ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:23:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b42da4476bbe098e9514359273a1b9869d25c25756fe3de8e612470abef09f1e-merged.mount: Deactivated successfully.
Oct 12 17:23:57 np0005481680 podman[271549]: 2025-10-12 21:23:57.777389877 +0000 UTC m=+0.727155626 container remove ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:23:57 np0005481680 systemd[1]: libpod-conmon-ccf9239f0ac98db62f9547f16e1c2145cd3dedbbf370a7a085857efa347d1559.scope: Deactivated successfully.
Oct 12 17:23:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:23:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:23:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:23:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.552018532 +0000 UTC m=+0.042751769 container create cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:23:58 np0005481680 systemd[1]: Started libpod-conmon-cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390.scope.
Oct 12 17:23:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.53322637 +0000 UTC m=+0.023959587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.639594773 +0000 UTC m=+0.130328020 container init cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.654585188 +0000 UTC m=+0.145318425 container start cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.659010632 +0000 UTC m=+0.149743869 container attach cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:23:58 np0005481680 epic_babbage[271702]: 167 167
Oct 12 17:23:58 np0005481680 systemd[1]: libpod-cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390.scope: Deactivated successfully.
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.663047556 +0000 UTC m=+0.153780793 container died cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:23:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3b158eaa1492ac67aa0416ff011e92d7ea4eb1f407291a62fea54a9fd0eb1dc6-merged.mount: Deactivated successfully.
Oct 12 17:23:58 np0005481680 podman[271686]: 2025-10-12 21:23:58.711646644 +0000 UTC m=+0.202379841 container remove cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:23:58 np0005481680 systemd[1]: libpod-conmon-cbc864878bb81b3ba2e194c09eec897c11c70f8c682a8b02fff6766afe239390.scope: Deactivated successfully.
Oct 12 17:23:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:23:58 np0005481680 podman[271726]: 2025-10-12 21:23:58.960879798 +0000 UTC m=+0.066281733 container create e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:23:59 np0005481680 systemd[1]: Started libpod-conmon-e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954.scope.
Oct 12 17:23:59 np0005481680 podman[271726]: 2025-10-12 21:23:58.934559102 +0000 UTC m=+0.039961037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:23:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:23:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec03b66da47f77f5d0fd1218bbc46a7bba0d59beb8fe2c223eab9aef00a5cb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec03b66da47f77f5d0fd1218bbc46a7bba0d59beb8fe2c223eab9aef00a5cb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec03b66da47f77f5d0fd1218bbc46a7bba0d59beb8fe2c223eab9aef00a5cb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec03b66da47f77f5d0fd1218bbc46a7bba0d59beb8fe2c223eab9aef00a5cb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:23:59 np0005481680 podman[271726]: 2025-10-12 21:23:59.066927404 +0000 UTC m=+0.172329399 container init e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:23:59 np0005481680 podman[271726]: 2025-10-12 21:23:59.077739622 +0000 UTC m=+0.183141547 container start e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:23:59 np0005481680 podman[271726]: 2025-10-12 21:23:59.083320385 +0000 UTC m=+0.188722380 container attach e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:23:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:23:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:23:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:23:59.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:23:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:23:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:23:59 np0005481680 practical_joliot[271742]: {
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:    "0": [
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:        {
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "devices": [
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "/dev/loop3"
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            ],
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "lv_name": "ceph_lv0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "lv_size": "21470642176",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "name": "ceph_lv0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "tags": {
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.cluster_name": "ceph",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.crush_device_class": "",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.encrypted": "0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.osd_id": "0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.type": "block",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.vdo": "0",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:                "ceph.with_tpm": "0"
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            },
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "type": "block",
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:            "vg_name": "ceph_vg0"
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:        }
Oct 12 17:23:59 np0005481680 practical_joliot[271742]:    ]
Oct 12 17:23:59 np0005481680 practical_joliot[271742]: }
Oct 12 17:23:59 np0005481680 systemd[1]: libpod-e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954.scope: Deactivated successfully.
Oct 12 17:23:59 np0005481680 conmon[271742]: conmon e4ece71151d62e4038e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954.scope/container/memory.events
Oct 12 17:23:59 np0005481680 podman[271752]: 2025-10-12 21:23:59.446573739 +0000 UTC m=+0.024901431 container died e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:23:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8ec03b66da47f77f5d0fd1218bbc46a7bba0d59beb8fe2c223eab9aef00a5cb8-merged.mount: Deactivated successfully.
Oct 12 17:23:59 np0005481680 podman[271752]: 2025-10-12 21:23:59.493871385 +0000 UTC m=+0.072199057 container remove e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:23:59 np0005481680 systemd[1]: libpod-conmon-e4ece71151d62e4038e5b4f977383506b883639256043faa1878de8ad8930954.scope: Deactivated successfully.
Oct 12 17:23:59 np0005481680 nova_compute[264665]: 2025-10-12 21:23:59.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.212312297 +0000 UTC m=+0.052417968 container create 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:24:00 np0005481680 systemd[1]: Started libpod-conmon-84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892.scope.
Oct 12 17:24:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.289332506 +0000 UTC m=+0.129438157 container init 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.195214297 +0000 UTC m=+0.035319948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.295219197 +0000 UTC m=+0.135324868 container start 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.299598809 +0000 UTC m=+0.139704460 container attach 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:24:00 np0005481680 happy_booth[271876]: 167 167
Oct 12 17:24:00 np0005481680 systemd[1]: libpod-84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892.scope: Deactivated successfully.
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.302677589 +0000 UTC m=+0.142783250 container died 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:24:00 np0005481680 nova_compute[264665]: 2025-10-12 21:24:00.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b610d0616b48c6223917e91ffa5df9277262a169f048072430101aa457633013-merged.mount: Deactivated successfully.
Oct 12 17:24:00 np0005481680 podman[271860]: 2025-10-12 21:24:00.362585088 +0000 UTC m=+0.202690719 container remove 84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:24:00 np0005481680 systemd[1]: libpod-conmon-84bedbeb930632c83d0b36f6caadfeb5a5ff52690d1ac9c25a76a4c38edd1892.scope: Deactivated successfully.
Oct 12 17:24:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:24:00Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:aa:84 10.100.0.9
Oct 12 17:24:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:24:00Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:aa:84 10.100.0.9
Oct 12 17:24:00 np0005481680 podman[271901]: 2025-10-12 21:24:00.629132008 +0000 UTC m=+0.074559907 container create cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:24:00 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 12 17:24:00 np0005481680 systemd[1]: Started libpod-conmon-cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343.scope.
Oct 12 17:24:00 np0005481680 podman[271901]: 2025-10-12 21:24:00.600545103 +0000 UTC m=+0.045973052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:24:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:24:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32aa9bbe0e1d1863bd77d276623ce597ce84bdc26ec6586b8a1e75407f60098/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:24:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32aa9bbe0e1d1863bd77d276623ce597ce84bdc26ec6586b8a1e75407f60098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:24:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32aa9bbe0e1d1863bd77d276623ce597ce84bdc26ec6586b8a1e75407f60098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:24:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32aa9bbe0e1d1863bd77d276623ce597ce84bdc26ec6586b8a1e75407f60098/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:24:00 np0005481680 podman[271901]: 2025-10-12 21:24:00.742732327 +0000 UTC m=+0.188160216 container init cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 17:24:00 np0005481680 podman[271901]: 2025-10-12 21:24:00.759029226 +0000 UTC m=+0.204457115 container start cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 17:24:00 np0005481680 podman[271901]: 2025-10-12 21:24:00.765259456 +0000 UTC m=+0.210687355 container attach cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:24:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 109 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct 12 17:24:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:01 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:01 np0005481680 lvm[271994]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:24:01 np0005481680 lvm[271994]: VG ceph_vg0 finished
Oct 12 17:24:01 np0005481680 stoic_nobel[271918]: {}
Oct 12 17:24:01 np0005481680 systemd[1]: libpod-cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343.scope: Deactivated successfully.
Oct 12 17:24:01 np0005481680 systemd[1]: libpod-cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343.scope: Consumed 1.420s CPU time.
Oct 12 17:24:01 np0005481680 podman[271901]: 2025-10-12 21:24:01.706982965 +0000 UTC m=+1.152410864 container died cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 17:24:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e32aa9bbe0e1d1863bd77d276623ce597ce84bdc26ec6586b8a1e75407f60098-merged.mount: Deactivated successfully.
Oct 12 17:24:01 np0005481680 podman[271901]: 2025-10-12 21:24:01.771265926 +0000 UTC m=+1.216693825 container remove cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_nobel, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 17:24:01 np0005481680 systemd[1]: libpod-conmon-cec1359c4501432d40bd97bb4291855388921df52f058e57779365d10f044343.scope: Deactivated successfully.
Oct 12 17:24:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:24:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:24:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:24:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:24:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:02] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 12 17:24:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:02] "GET /metrics HTTP/1.1" 200 48387 "" "Prometheus/2.51.0"
Oct 12 17:24:02 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:24:02 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:24:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 109 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 777 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct 12 17:24:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:03 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:24:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:24:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:04 np0005481680 nova_compute[264665]: 2025-10-12 21:24:04.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 952 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 12 17:24:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:05 np0005481680 nova_compute[264665]: 2025-10-12 21:24:05.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:06 np0005481680 podman[272065]: 2025-10-12 21:24:06.130816904 +0000 UTC m=+0.078653612 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:24:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:06 np0005481680 nova_compute[264665]: 2025-10-12 21:24:06.597 2 INFO nova.compute.manager [None req-2e324eea-8ebd-4e1b-91b2-422e2060e2bf 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Get console output
Oct 12 17:24:06 np0005481680 nova_compute[264665]: 2025-10-12 21:24:06.605 2 INFO oslo.privsep.daemon [None req-2e324eea-8ebd-4e1b-91b2-422e2060e2bf 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp32df7xy5/privsep.sock']
Oct 12 17:24:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 12 17:24:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:07.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:07.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:24:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:07 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.364 2 INFO oslo.privsep.daemon [None req-2e324eea-8ebd-4e1b-91b2-422e2060e2bf 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Spawned new privsep daemon via rootwrap
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.232 629 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.240 629 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.245 629 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.246 629 INFO oslo.privsep.daemon [-] privsep daemon running as pid 629
Oct 12 17:24:07 np0005481680 nova_compute[264665]: 2025-10-12 21:24:07.470 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 12 17:24:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 12 17:24:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:09.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:09 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:09 np0005481680 nova_compute[264665]: 2025-10-12 21:24:09.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:10 np0005481680 nova_compute[264665]: 2025-10-12 21:24:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 12 17:24:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:11.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:11 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c0033c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:12] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:24:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:12] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:24:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:24:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:12.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:24:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 107 KiB/s wr, 35 op/s
Oct 12 17:24:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:14.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80003f10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 111 KiB/s wr, 35 op/s
Oct 12 17:24:14 np0005481680 nova_compute[264665]: 2025-10-12 21:24:14.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:15 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:15 np0005481680 nova_compute[264665]: 2025-10-12 21:24:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 12 17:24:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:17.161Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:24:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:17.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:24:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:17.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:24:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:17 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa40008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:24:18
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr']
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:24:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:24:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:24:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:18.360 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:24:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:18.361 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:24:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:18.362 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:24:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 12 17:24:19 np0005481680 podman[272107]: 2025-10-12 21:24:19.131589113 +0000 UTC m=+0.086749210 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:24:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:19 np0005481680 podman[272108]: 2025-10-12 21:24:19.222031577 +0000 UTC m=+0.171695683 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:24:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:24:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 5741 writes, 25K keys, 5741 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
Cumulative WAL: 5741 writes, 5741 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1548 writes, 6798 keys, 1548 commit groups, 1.0 writes per commit group, ingest: 11.23 MB, 0.02 MB/s
Interval WAL: 1548 writes, 1548 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    130.1      0.30              0.13        14    0.021       0      0       0.0       0.0
  L6      1/0   11.69 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    158.2    135.3      1.18              0.52        13    0.091     66K   6873       0.0       0.0
 Sum      1/0   11.69 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.1    126.3    134.2      1.48              0.64        27    0.055     66K   6873       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3    116.3    116.6      0.73              0.29        12    0.061     34K   3069       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    158.2    135.3      1.18              0.52        13    0.091     66K   6873       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.2      0.28              0.13        13    0.022       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.038, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.19 GB write, 0.11 MB/s write, 0.18 GB read, 0.10 MB/s read, 1.5 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562cd3961350#2 capacity: 304.00 MB usage: 14.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000205 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(806,13.92 MB,4.58053%) FilterBlock(28,199.30 KB,0.0640217%) IndexBlock(28,344.98 KB,0.110822%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 12 17:24:19 np0005481680 nova_compute[264665]: 2025-10-12 21:24:19.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:20 np0005481680 nova_compute[264665]: 2025-10-12 21:24:20.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa40008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 12 17:24:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:21.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:21 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:22] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:22] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:22 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:22.190 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:24:22 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:22.191 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:24:22 np0005481680 nova_compute[264665]: 2025-10-12 21:24:22.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 12 17:24:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:23.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:23 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:24:23.193 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:24:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:23 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:24:24 np0005481680 nova_compute[264665]: 2025-10-12 21:24:24.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:25 np0005481680 podman[272185]: 2025-10-12 21:24:25.145518292 +0000 UTC m=+0.100559885 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:24:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:25.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:25 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:25 np0005481680 nova_compute[264665]: 2025-10-12 21:24:25.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:24:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:27.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:24:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:27 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4002330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:24:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:29 np0005481680 nova_compute[264665]: 2025-10-12 21:24:29.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:30 np0005481680 nova_compute[264665]: 2025-10-12 21:24:30.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:24:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:24:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:31 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:32] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:32] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:32 np0005481680 nova_compute[264665]: 2025-10-12 21:24:32.668 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:24:32 np0005481680 nova_compute[264665]: 2025-10-12 21:24:32.669 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:24:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:24:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:33 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:24:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:24:33 np0005481680 nova_compute[264665]: 2025-10-12 21:24:33.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:24:33 np0005481680 nova_compute[264665]: 2025-10-12 21:24:33.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:24:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:34.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.859 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.859 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquired lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.859 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.860 2 DEBUG nova.objects.instance [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 272e54e6-8c70-4d93-838c-b6511e1a9a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:24:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Oct 12 17:24:34 np0005481680 nova_compute[264665]: 2025-10-12 21:24:34.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:35 np0005481680 nova_compute[264665]: 2025-10-12 21:24:35.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
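
[Editor's note] The mon's cache autotuner logs raw byte counts; for readability they convert as follows (plain unit arithmetic, values copied from the line above):

    for name, n in {"cache_size": 1020054731, "inc_alloc": 348127232,
                    "full_alloc": 348127232, "kv_alloc": 318767104}.items():
        print(f"{name}: {n / 2**20:.0f} MiB")
    # cache_size ~973 MiB; inc_alloc/full_alloc 332 MiB; kv_alloc 304 MiB
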
Oct 12 17:24:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:36.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.569 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
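
[Editor's note] The info-cache payload in the line above is ordinary JSON. A trimmed copy of it (field names taken verbatim from the log) can be walked to list the instance's addressing, which is usually all that matters when auditing a heal cycle:

    import json

    # Trimmed copy of the network_info structure logged above.
    network_info = json.loads("""[{
      "id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b",
      "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.9", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.228",
                                   "type": "floating"}]}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("  floating:", fip["address"])
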
Oct 12 17:24:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.585 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Releasing lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.585 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.586 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.587 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.587 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.587 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.607 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.608 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.608 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.609 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:24:36 np0005481680 nova_compute[264665]: 2025-10-12 21:24:36.609 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:24:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 12 17:24:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:24:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250909162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.079 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
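
[Editor's note] The resource tracker shells out to the exact command in the CMD lines above, and it costs roughly half a second per call here. A sketch of running the same command by hand and summarizing the totals; it assumes the ceph CLI, /etc/ceph/ceph.conf, and the client.openstack keyring are present, and the JSON keys follow ceph's documented "df" output (they may vary by release):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total: {stats['total_bytes'] / gib:.1f} GiB, "
          f"used: {stats['total_used_bytes'] / gib:.1f} GiB, "
          f"avail: {stats['total_avail_bytes'] / gib:.1f} GiB")
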
Oct 12 17:24:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:37.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:24:37 np0005481680 podman[272238]: 2025-10-12 21:24:37.173824429 +0000 UTC m=+0.137259119 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
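
[Editor's note] The periodic podman lines are healthcheck events (health_status=healthy, health_failing_streak=0). The same state can be read back on the host; a hedged sketch, noting that the Go template path differs between podman releases (.State.Health.Status vs .State.Healthcheck.Status), so both are tried:

    import subprocess

    def health_status(name="ovn_metadata_agent"):
        # Try both template paths; podman errors on the one it lacks.
        for tmpl in ("{{.State.Health.Status}}",
                     "{{.State.Healthcheck.Status}}"):
            r = subprocess.run(["podman", "inspect", "--format", tmpl, name],
                               capture_output=True, text=True)
            if r.returncode == 0 and r.stdout.strip():
                return r.stdout.strip()
        return "unknown"

    print(health_status())
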
Oct 12 17:24:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.187 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.188 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 12 17:24:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:37 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.397 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.400 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4381MB free_disk=59.92181396484375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.400 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.401 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.481 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance 272e54e6-8c70-4d93-838c-b6511e1a9a61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.482 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.483 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:24:37 np0005481680 nova_compute[264665]: 2025-10-12 21:24:37.580 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:24:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:24:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565443644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.077 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.086 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
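
[Editor's note] The inventory nova pushes above translates into schedulable capacity via placement's usual rule, capacity = (total - reserved) * allocation_ratio; with the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 52.2
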
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.138 2 ERROR nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [req-071ff2fb-02b8-4ee5-ad41-e945993047e5] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID d63acd5d-c9c0-44fc-813b-0eadb368ddab.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-071ff2fb-02b8-4ee5-ad41-e945993047e5"}]}#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.165 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing inventories for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.186 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating ProviderTree inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.187 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.219 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing aggregate associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.261 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing trait associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.338 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:24:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:24:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426308402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.802 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.810 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.880 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updated inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
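
[Editor's note] The ERROR at 21:24:38.138 is placement's optimistic concurrency control at work, not bad data: every inventory PUT carries the provider generation, a stale generation yields 409 placement.concurrent_update, and the report client recovers exactly as these lines show, refreshing at generation 3, resending, and then advancing the tree to generation 4. A hedged sketch of that loop against the placement REST API; the base URL and the bare requests session stand in for nova's authenticated client:

    import requests

    BASE = "http://placement.example.com/resource_providers"  # assumption

    def set_inventory(uuid, inventories, session=requests, retries=3):
        for _ in range(retries):
            # Re-read the provider to pick up the current generation.
            rp = session.get(f"{BASE}/{uuid}").json()
            resp = session.put(
                f"{BASE}/{uuid}/inventories",
                json={"resource_provider_generation": rp["generation"],
                      "inventories": inventories},
            )
            if resp.status_code != 409:   # 409 = concurrent update, retry
                resp.raise_for_status()
                return resp.json()
        raise RuntimeError("generation conflict persisted after retries")
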
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.880 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.881 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.906 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:24:38 np0005481680 nova_compute[264665]: 2025-10-12 21:24:38.906 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:24:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 12 17:24:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:39 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:39 np0005481680 nova_compute[264665]: 2025-10-12 21:24:39.905 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:39 np0005481680 nova_compute[264665]: 2025-10-12 21:24:39.906 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:24:40 np0005481680 nova_compute[264665]: 2025-10-12 21:24:40.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:40 np0005481680 nova_compute[264665]: 2025-10-12 21:24:40.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:40.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 12 17:24:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:42] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:42] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
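
[Editor's note] These /metrics hits are Prometheus scraping the mgr's exporter; the same endpoint can be pulled by hand. Port 9283 is the mgr prometheus module's default and is an assumption here, since the access log records only the HTTP request line:

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as r:
        for line in r.read().decode().splitlines():
            if line.startswith("ceph_health"):
                print(line)
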
Oct 12 17:24:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 12 17:24:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:43 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:44.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 188 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 12 17:24:45 np0005481680 nova_compute[264665]: 2025-10-12 21:24:45.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:45 np0005481680 nova_compute[264665]: 2025-10-12 21:24:45.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:46.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 188 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Oct 12 17:24:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:47.171Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:24:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:47.172Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:24:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:47.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
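
[Editor's note] Both dashboard receivers are unreachable within alertmanager's deadline (dial tcp ... i/o timeout, then the retries are canceled). A quick probe of the same URLs, taken from the error text, separates a silent drop from an active refusal; the minimal JSON body is an assumption, since a real alertmanager webhook payload carries more fields:

    import requests

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={"alerts": []}, timeout=5)
            print(url, "->", r.status_code)
        except requests.exceptions.RequestException as exc:
            print(url, "->", type(exc).__name__, exc)
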
Oct 12 17:24:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:47.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:47 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2848516749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:24:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2848516749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:24:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:48.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 188 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Oct 12 17:24:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:24:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:24:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212449 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
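
[Editor's note] Here haproxy's Layer-4 check got an outright connection refusal from nfs.cephfs.0 and withdrew it, leaving two backends. Note this is a different failure mode from the TIRPC header errors above, where connections succeed but are dropped for lacking a PROXY preamble. The check itself is nothing more than a timed TCP connect, sketched here with illustrative addresses, since the log does not map backend names to hosts:

    import socket

    def l4_check(host, port=2049, timeout=2.0):
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return "UP"
        except ConnectionRefusedError:
            return "DOWN (connection refused)"   # what haproxy reports above
        except (socket.timeout, TimeoutError):
            return "DOWN (timeout)"

    for host in ("192.168.122.100", "192.168.122.101", "192.168.122.102"):
        print(host, l4_check(host))
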
Oct 12 17:24:50 np0005481680 nova_compute[264665]: 2025-10-12 21:24:50.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:50 np0005481680 podman[272346]: 2025-10-12 21:24:50.135464204 +0000 UTC m=+0.094311526 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 12 17:24:50 np0005481680 podman[272347]: 2025-10-12 21:24:50.175239655 +0000 UTC m=+0.132842225 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 12 17:24:50 np0005481680 nova_compute[264665]: 2025-10-12 21:24:50.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:50.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 12 17:24:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:51 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:52] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:24:52] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:24:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 200 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 12 17:24:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:53.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:53 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc002440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 12 17:24:55 np0005481680 nova_compute[264665]: 2025-10-12 21:24:55.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:55 np0005481680 nova_compute[264665]: 2025-10-12 21:24:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:56 np0005481680 podman[272401]: 2025-10-12 21:24:56.144280918 +0000 UTC m=+0.098831390 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:24:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:24:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 119 KiB/s wr, 47 op/s
Oct 12 17:24:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:24:57.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
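The alertmanager dispatcher error above shows its webhook notifier timing out against the ceph-dashboard receivers on compute-1 and compute-2. A sketch reproducing the failing call with an explicit timeout; the URL is taken from the error, while the JSON body is a hypothetical minimal alertmanager webhook payload not captured in the log:

import json
import urllib.request

req = urllib.request.Request(
    'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
    data=json.dumps({'alerts': []}).encode(),
    headers={'Content-Type': 'application/json'})
try:
    urllib.request.urlopen(req, timeout=5)
except Exception as exc:  # a hang here mirrors "context deadline exceeded"
    print(f'receiver unreachable: {exc}')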
Oct 12 17:24:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:57 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc002440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:24:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:24:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:24:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 119 KiB/s wr, 47 op/s
Oct 12 17:24:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:24:59Z|00034|binding|INFO|Releasing lport fc66f074-81fa-4e66-9e8f-de55158f2451 from this chassis (sb_readonly=0)
Oct 12 17:24:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:24:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:24:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:24:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:24:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004500 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.927 2 DEBUG nova.compute.manager [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.928 2 DEBUG nova.compute.manager [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing instance network info cache due to event network-changed-7087c316-8bc6-4ae4-a39d-10fad6139d2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.928 2 DEBUG oslo_concurrency.lockutils [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.929 2 DEBUG oslo_concurrency.lockutils [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:24:59 np0005481680 nova_compute[264665]: 2025-10-12 21:24:59.929 2 DEBUG nova.network.neutron [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Refreshing network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:24:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:24:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.023 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.024 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.024 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.025 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.025 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
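The Acquiring / acquired / "released" triplets above come from oslo.concurrency's named locks. A minimal sketch of the decorator pattern that emits them, with the lock name copied from the log and the function body purely illustrative:

from oslo_concurrency import lockutils

# Entering the decorated function logs "acquired"; returning logs "released".
@lockutils.synchronized('272e54e6-8c70-4d93-838c-b6511e1a9a61-events')
def _clear_events():
    # Illustrative body; nova's real _clear_events drains the per-instance
    # event dict while the lock is held.
    return {}

_clear_events()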
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.028 2 INFO nova.compute.manager [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Terminating instance#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.030 2 DEBUG nova.compute.manager [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 kernel: tap7087c316-8b (unregistering): left promiscuous mode
Oct 12 17:25:00 np0005481680 NetworkManager[44859]: <info>  [1760304300.1321] device (tap7087c316-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:00Z|00035|binding|INFO|Releasing lport 7087c316-8bc6-4ae4-a39d-10fad6139d2b from this chassis (sb_readonly=0)
Oct 12 17:25:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:00Z|00036|binding|INFO|Setting lport 7087c316-8bc6-4ae4-a39d-10fad6139d2b down in Southbound
Oct 12 17:25:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:00Z|00037|binding|INFO|Removing iface tap7087c316-8b ovn-installed in OVS
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.158 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:aa:84 10.100.0.9'], port_security=['fa:16:3e:17:aa:84 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '272e54e6-8c70-4d93-838c-b6511e1a9a61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb8e0c26-7a4c-492b-92e7-613512ada910', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45ae961f-5a05-4a7d-be11-726aef1ceda0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=218bc91e-511f-4a31-8fe3-010bc033ff95, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=7087c316-8bc6-4ae4-a39d-10fad6139d2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.160 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 7087c316-8bc6-4ae4-a39d-10fad6139d2b in datapath eb8e0c26-7a4c-492b-92e7-613512ada910 unbound from our chassis#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.162 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb8e0c26-7a4c-492b-92e7-613512ada910, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.163 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3da04e-130b-438a-8b35-6fb930a6716a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.164 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910 namespace which is not needed anymore#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 12 17:25:00 np0005481680 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.877s CPU time.
Oct 12 17:25:00 np0005481680 systemd-machined[218338]: Machine qemu-1-instance-00000001 terminated.
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.275 2 INFO nova.virt.libvirt.driver [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Instance destroyed successfully.#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.276 2 DEBUG nova.objects.instance [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid 272e54e6-8c70-4d93-838c-b6511e1a9a61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.293 2 DEBUG nova.virt.libvirt.vif [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:23:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1130035828',display_name='tempest-TestNetworkBasicOps-server-1130035828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1130035828',id=1,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFaoxoysQrc+7voGJH9+95zvBEIx8T8j27vK54pA8C5IkKm6egwZlxQ/RFTI5+QcGyvz5wcpnBScK+cserfjr2xL4tIWlrufZ6VInpDPrirN0ndQueVA6v2+Zc1DF6Zdeg==',key_name='tempest-TestNetworkBasicOps-1601546160',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:23:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-ddym0l0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:23:48Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=272e54e6-8c70-4d93-838c-b6511e1a9a61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.293 2 DEBUG nova.network.os_vif_util [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.294 2 DEBUG nova.network.os_vif_util [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.295 2 DEBUG os_vif [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7087c316-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.307 2 INFO os_vif [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:aa:84,bridge_name='br-int',has_traffic_filtering=True,id=7087c316-8bc6-4ae4-a39d-10fad6139d2b,network=Network(eb8e0c26-7a4c-492b-92e7-613512ada910),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7087c316-8b')#033[00m
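The DelPortCommand logged above is ovsdbapp removing the instance's tap interface from br-int during VIF unplug. A minimal sketch of issuing the same idempotent removal directly, assuming a local OVSDB unix socket (the actual connection details are not in the log):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local socket path; deployments may expose TCP instead.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Mirrors the logged command: if_exists=True makes the delete idempotent.
api.del_port('tap7087c316-8b', bridge='br-int', if_exists=True).execute(
    check_error=True)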
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [NOTICE]   (271302) : haproxy version is 2.8.14-c23fe91
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [NOTICE]   (271302) : path to executable is /usr/sbin/haproxy
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [WARNING]  (271302) : Exiting Master process...
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [WARNING]  (271302) : Exiting Master process...
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [ALERT]    (271302) : Current worker (271304) exited with code 143 (Terminated)
Oct 12 17:25:00 np0005481680 neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910[271298]: [WARNING]  (271302) : All workers exited. Exiting... (0)
Oct 12 17:25:00 np0005481680 systemd[1]: libpod-08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7.scope: Deactivated successfully.
Oct 12 17:25:00 np0005481680 podman[272467]: 2025-10-12 21:25:00.412216622 +0000 UTC m=+0.067533568 container died 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:25:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8872de8395f5ba3cab9d653d9fce7d31010ca6f5e68c4da037bb492a87a83153-merged.mount: Deactivated successfully.
Oct 12 17:25:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7-userdata-shm.mount: Deactivated successfully.
Oct 12 17:25:00 np0005481680 podman[272467]: 2025-10-12 21:25:00.471342631 +0000 UTC m=+0.126659577 container cleanup 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:25:00 np0005481680 systemd[1]: libpod-conmon-08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7.scope: Deactivated successfully.
Oct 12 17:25:00 np0005481680 podman[272508]: 2025-10-12 21:25:00.575806594 +0000 UTC m=+0.069491316 container remove 08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:25:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.589 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9175af-bbc3-4bab-8f33-1b94e6ff3b8a]: (4, ('Sun Oct 12 09:25:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910 (08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7)\n08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7\nSun Oct 12 09:25:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910 (08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7)\n08c3ac2c912712319d724a4a8cb3b9941dc6016a9e257214d9f7e95af03c1db7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.592 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[858b9b57-9f1d-418e-a38e-16714401e749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.593 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb8e0c26-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:00 np0005481680 kernel: tapeb8e0c26-70: left promiscuous mode
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.631 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[514c1a43-a498-43a9-9e12-770e72a79a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.659 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f526b1e2-672c-45c3-9a8a-50356b87c94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.662 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c512ab-d1b5-410c-a3e4-f6db8baffe73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.689 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf8f889-c53f-43c6-8070-84a585dbce5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389840, 'reachable_time': 19105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272524, 'error': None, 'target': 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.706 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:25:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:00.707 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[014bfaec-5699-45e4-8a4f-bcb826718e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:00 np0005481680 systemd[1]: run-netns-ovnmeta\x2deb8e0c26\x2d7a4c\x2d492b\x2d92e7\x2d613512ada910.mount: Deactivated successfully.
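With no VIF ports left in the network, the metadata agent deletes the now-empty ovnmeta- namespace via neutron's privileged ip_lib (remove_netns above). A sketch of the CLI equivalent; neutron itself goes through pyroute2 under privsep, and this requires root:

import subprocess

ns = 'ovnmeta-eb8e0c26-7a4c-492b-92e7-613512ada910'
# Removes the namespace and its bind mount under /run/netns, which is
# what the systemd "run-netns-...mount: Deactivated" line reflects.
subprocess.run(['ip', 'netns', 'delete', ns], check=True)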
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.843 2 INFO nova.virt.libvirt.driver [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Deleting instance files /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61_del#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.844 2 INFO nova.virt.libvirt.driver [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Deletion of /var/lib/nova/instances/272e54e6-8c70-4d93-838c-b6511e1a9a61_del complete#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.904 2 DEBUG nova.virt.libvirt.host [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.904 2 INFO nova.virt.libvirt.host [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] UEFI support detected#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.907 2 INFO nova.compute.manager [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.908 2 DEBUG oslo.service.loopingcall [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.909 2 DEBUG nova.compute.manager [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:25:00 np0005481680 nova_compute[264665]: 2025-10-12 21:25:00.909 2 DEBUG nova.network.neutron [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 12 17:25:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 119 KiB/s wr, 48 op/s
Oct 12 17:25:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:01 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.628 2 DEBUG nova.network.neutron [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updated VIF entry in instance network info cache for port 7087c316-8bc6-4ae4-a39d-10fad6139d2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.629 2 DEBUG nova.network.neutron [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [{"id": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "address": "fa:16:3e:17:aa:84", "network": {"id": "eb8e0c26-7a4c-492b-92e7-613512ada910", "bridge": "br-int", "label": "tempest-network-smoke--1973050096", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7087c316-8b", "ovs_interfaceid": "7087c316-8bc6-4ae4-a39d-10fad6139d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.663 2 DEBUG oslo_concurrency.lockutils [req-44f8f964-d252-46f5-a22d-34a818ac72ab req-7a70af72-d1cc-454e-889a-2c8ffe8c6723 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-272e54e6-8c70-4d93-838c-b6511e1a9a61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.858 2 DEBUG nova.network.neutron [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.884 2 INFO nova.compute.manager [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Took 0.97 seconds to deallocate network for instance.#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.936 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.937 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.951 2 DEBUG nova.compute.manager [req-f15387e4-9784-460e-86c3-d8240da80b66 req-ef55bda1-e6a3-4808-98a2-82f89250b110 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-vif-deleted-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:01 np0005481680 nova_compute[264665]: 2025-10-12 21:25:01.995 2 DEBUG oslo_concurrency.processutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:02] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:25:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:02] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.058 2 DEBUG nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-vif-unplugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.061 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.061 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.062 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.062 2 DEBUG nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] No waiting events found dispatching network-vif-unplugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.062 2 WARNING nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received unexpected event network-vif-unplugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b for instance with vm_state deleted and task_state None.
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.062 2 DEBUG nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.062 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.063 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.063 2 DEBUG oslo_concurrency.lockutils [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.063 2 DEBUG nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] No waiting events found dispatching network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.063 2 WARNING nova.compute.manager [req-b9e4d016-f1ce-48f9-8729-61a7c6ebdb9a req-153f5f2c-087f-49bc-8b4a-8fe8763c7579 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Received unexpected event network-vif-plugged-7087c316-8bc6-4ae4-a39d-10fad6139d2b for instance with vm_state deleted and task_state None.
Oct 12 17:25:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:25:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936369425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.479 2 DEBUG oslo_concurrency.processutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.486 2 DEBUG nova.compute.provider_tree [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.504 2 DEBUG nova.scheduler.client.report [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.528 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.576 2 INFO nova.scheduler.client.report [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance 272e54e6-8c70-4d93-838c-b6511e1a9a61
Oct 12 17:25:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:02 np0005481680 nova_compute[264665]: 2025-10-12 21:25:02.659 2 DEBUG oslo_concurrency.lockutils [None req-73686615-dc95-48bb-8e1a-6cce04d0d5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "272e54e6-8c70-4d93-838c-b6511e1a9a61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:25:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:25:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 121 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 12 17:25:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:25:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:25:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:25:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:03 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:03 np0005481680 podman[272752]: 2025-10-12 21:25:03.976307587 +0000 UTC m=+0.049040092 container create dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:25:04 np0005481680 systemd[1]: Started libpod-conmon-dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99.scope.
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:03.954564328 +0000 UTC m=+0.027296853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:04.081143321 +0000 UTC m=+0.153875856 container init dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:04.095412037 +0000 UTC m=+0.168144542 container start dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:04.101626157 +0000 UTC m=+0.174358682 container attach dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:25:04 np0005481680 great_chebyshev[272768]: 167 167
Oct 12 17:25:04 np0005481680 systemd[1]: libpod-dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99.scope: Deactivated successfully.
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:04.104900661 +0000 UTC m=+0.177633226 container died dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:25:04 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a84d45486239dac5f89cb4a3b6a47556b5bbd602b5fff41eb9f86ca3d7163f24-merged.mount: Deactivated successfully.
Oct 12 17:25:04 np0005481680 podman[272752]: 2025-10-12 21:25:04.164333998 +0000 UTC m=+0.237066513 container remove dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chebyshev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:25:04 np0005481680 systemd[1]: libpod-conmon-dcd1e14798676f973c3809b81d31e5553433f7892ebac90dbd1d6d7c029c7d99.scope: Deactivated successfully.
Oct 12 17:25:04 np0005481680 podman[272793]: 2025-10-12 21:25:04.433614378 +0000 UTC m=+0.082191443 container create 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:25:04 np0005481680 systemd[1]: Started libpod-conmon-424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9.scope.
Oct 12 17:25:04 np0005481680 podman[272793]: 2025-10-12 21:25:04.402916539 +0000 UTC m=+0.051493654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:04 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:04 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:04 np0005481680 podman[272793]: 2025-10-12 21:25:04.554206687 +0000 UTC m=+0.202783802 container init 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:25:04 np0005481680 podman[272793]: 2025-10-12 21:25:04.569649704 +0000 UTC m=+0.218226759 container start 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:04 np0005481680 podman[272793]: 2025-10-12 21:25:04.574299313 +0000 UTC m=+0.222876378 container attach 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:04.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:04 np0005481680 ecstatic_allen[272809]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:25:04 np0005481680 ecstatic_allen[272809]: --> All data devices are unavailable
Oct 12 17:25:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 18 KiB/s wr, 59 op/s
Oct 12 17:25:05 np0005481680 systemd[1]: libpod-424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9.scope: Deactivated successfully.
Oct 12 17:25:05 np0005481680 podman[272824]: 2025-10-12 21:25:05.059946593 +0000 UTC m=+0.036431097 container died 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:25:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-516da36057d2aae9c65e5a75684acd3aa82e7f94ed71fc9b94c8ee0931519cff-merged.mount: Deactivated successfully.
Oct 12 17:25:05 np0005481680 podman[272824]: 2025-10-12 21:25:05.118728943 +0000 UTC m=+0.095213427 container remove 424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:05 np0005481680 nova_compute[264665]: 2025-10-12 21:25:05.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:05 np0005481680 systemd[1]: libpod-conmon-424340d8f77f0ca32ddc178d08a43b9dfe71fb664d5d3618985a28182ef6bbc9.scope: Deactivated successfully.
Oct 12 17:25:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:05.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:05 np0005481680 nova_compute[264665]: 2025-10-12 21:25:05.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:05 np0005481680 podman[272933]: 2025-10-12 21:25:05.918269809 +0000 UTC m=+0.067224799 container create 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:25:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:25:05 np0005481680 systemd[1]: Started libpod-conmon-8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644.scope.
Oct 12 17:25:05 np0005481680 podman[272933]: 2025-10-12 21:25:05.892537168 +0000 UTC m=+0.041492198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:06 np0005481680 podman[272933]: 2025-10-12 21:25:06.031203391 +0000 UTC m=+0.180158431 container init 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 17:25:06 np0005481680 podman[272933]: 2025-10-12 21:25:06.04326803 +0000 UTC m=+0.192223030 container start 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:25:06 np0005481680 podman[272933]: 2025-10-12 21:25:06.047392107 +0000 UTC m=+0.196347097 container attach 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 17:25:06 np0005481680 zen_cray[272950]: 167 167
Oct 12 17:25:06 np0005481680 systemd[1]: libpod-8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644.scope: Deactivated successfully.
Oct 12 17:25:06 np0005481680 conmon[272950]: conmon 8807f674c947c66253ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644.scope/container/memory.events
Oct 12 17:25:06 np0005481680 podman[272933]: 2025-10-12 21:25:06.053275919 +0000 UTC m=+0.202230909 container died 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:25:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6051a50632ddd469b3a7519a48ad5ef6aaf9ec24e97d2e8f50ac960df9966cce-merged.mount: Deactivated successfully.
Oct 12 17:25:06 np0005481680 podman[272933]: 2025-10-12 21:25:06.113974218 +0000 UTC m=+0.262929208 container remove 8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_cray, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:06 np0005481680 systemd[1]: libpod-conmon-8807f674c947c66253ac96166e9d8a407081fd9d218b0f1e179f9832c8176644.scope: Deactivated successfully.
Oct 12 17:25:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:06 np0005481680 nova_compute[264665]: 2025-10-12 21:25:06.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:06 np0005481680 nova_compute[264665]: 2025-10-12 21:25:06.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.388533543 +0000 UTC m=+0.067900156 container create 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:25:06 np0005481680 systemd[1]: Started libpod-conmon-4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246.scope.
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.363040608 +0000 UTC m=+0.042407281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b73572bb16d9d4892664a8cd862bb26fab7edb7282fab49e8478d656b3de4c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b73572bb16d9d4892664a8cd862bb26fab7edb7282fab49e8478d656b3de4c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b73572bb16d9d4892664a8cd862bb26fab7edb7282fab49e8478d656b3de4c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b73572bb16d9d4892664a8cd862bb26fab7edb7282fab49e8478d656b3de4c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.491113419 +0000 UTC m=+0.170480072 container init 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.503635591 +0000 UTC m=+0.183002214 container start 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.507809808 +0000 UTC m=+0.187176431 container attach 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 17:25:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:25:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:06.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:25:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004560 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]: {
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:    "0": [
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:        {
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "devices": [
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "/dev/loop3"
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            ],
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "lv_name": "ceph_lv0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "lv_size": "21470642176",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "name": "ceph_lv0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "tags": {
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.cluster_name": "ceph",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.crush_device_class": "",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.encrypted": "0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.osd_id": "0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.type": "block",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.vdo": "0",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:                "ceph.with_tpm": "0"
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            },
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "type": "block",
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:            "vg_name": "ceph_vg0"
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:        }
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]:    ]
Oct 12 17:25:06 np0005481680 objective_ramanujan[272992]: }
Oct 12 17:25:06 np0005481680 systemd[1]: libpod-4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246.scope: Deactivated successfully.
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.845741542 +0000 UTC m=+0.525108165 container died 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:25:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6b73572bb16d9d4892664a8cd862bb26fab7edb7282fab49e8478d656b3de4c6-merged.mount: Deactivated successfully.
Oct 12 17:25:06 np0005481680 podman[272974]: 2025-10-12 21:25:06.903133207 +0000 UTC m=+0.582499830 container remove 4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:25:06 np0005481680 systemd[1]: libpod-conmon-4fcd2d8a0bf448982ff4aef62297b093af2951e8ecbf380a06491eac98822246.scope: Deactivated successfully.
Oct 12 17:25:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Oct 12 17:25:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:07.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:25:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:07 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.621613349 +0000 UTC m=+0.056027141 container create 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:07 np0005481680 systemd[1]: Started libpod-conmon-017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d.scope.
Oct 12 17:25:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.600875876 +0000 UTC m=+0.035289668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.714413924 +0000 UTC m=+0.148827766 container init 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.725666782 +0000 UTC m=+0.160080574 container start 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.730487726 +0000 UTC m=+0.164901568 container attach 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:25:07 np0005481680 dazzling_hermann[273125]: 167 167
Oct 12 17:25:07 np0005481680 systemd[1]: libpod-017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d.scope: Deactivated successfully.
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.737112516 +0000 UTC m=+0.171526358 container died 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:25:07 np0005481680 podman[273121]: 2025-10-12 21:25:07.77115259 +0000 UTC m=+0.099867356 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:25:07 np0005481680 systemd[1]: var-lib-containers-storage-overlay-118468e235ebe3c8d63886254ad3126369ee8627b348f5453bf42c5a24b7974e-merged.mount: Deactivated successfully.
Oct 12 17:25:07 np0005481680 podman[273107]: 2025-10-12 21:25:07.793587497 +0000 UTC m=+0.228001289 container remove 017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:25:07 np0005481680 systemd[1]: libpod-conmon-017a76041ae2ba35c76d7a140ebe4d67606c1fa9adbddc66b6bee038cba73f5d.scope: Deactivated successfully.
Oct 12 17:25:08 np0005481680 podman[273168]: 2025-10-12 21:25:08.033922163 +0000 UTC m=+0.078375555 container create 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 17:25:08 np0005481680 systemd[1]: Started libpod-conmon-54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a.scope.
Oct 12 17:25:08 np0005481680 podman[273168]: 2025-10-12 21:25:08.003867471 +0000 UTC m=+0.048320923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:25:08 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da68571f565f75cf25b3ab1a86eadc0f4d05f81a9ede9fa20c377d1536adf53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da68571f565f75cf25b3ab1a86eadc0f4d05f81a9ede9fa20c377d1536adf53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da68571f565f75cf25b3ab1a86eadc0f4d05f81a9ede9fa20c377d1536adf53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da68571f565f75cf25b3ab1a86eadc0f4d05f81a9ede9fa20c377d1536adf53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
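
The four xfs warnings above are informational: the filesystem's on-disk inode timestamps are 32-bit and saturate at 0x7fffffff seconds past the Unix epoch, the value the kernel prints. The cutoff is easy to verify:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647, the value printed by the kernel
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00

Filesystems created with the XFS bigtime feature push this limit out (to the year 2486); nothing here is failing.
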
Oct 12 17:25:08 np0005481680 podman[273168]: 2025-10-12 21:25:08.156868633 +0000 UTC m=+0.201322035 container init 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:25:08 np0005481680 podman[273168]: 2025-10-12 21:25:08.168591763 +0000 UTC m=+0.213045165 container start 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:25:08 np0005481680 podman[273168]: 2025-10-12 21:25:08.17233805 +0000 UTC m=+0.216791462 container attach 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:25:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
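
radosgw's beast frontend writes one access line per request in a fixed layout (request pointer, client IP, user, timestamp, request line, status, bytes, then latency); the anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102 are load-balancer health checks. A parser for this exact layout, as a sketch inferred from the lines above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:25:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("status"), m.group("latency"))
    # 192.168.122.100 200 0.001000025
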
Oct 12 17:25:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:08 np0005481680 lvm[273260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:25:08 np0005481680 lvm[273260]: VG ceph_vg0 finished
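
The two lvm[273260] lines are event-based autoactivation at work: a udev event for /dev/loop3 triggered pvscan, which saw that every PV of ceph_vg0 is now present ("is complete") and finished activating the VG. Completeness can be checked the same way after the fact with vgs JSON reporting; a sketch (VG name from the log, field names are standard lvm2 report columns):

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "ceph_vg0",
         "-o", "vg_name,pv_count,vg_missing_pv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    vg = json.loads(out)["report"][0]["vg"][0]
    print(vg["vg_name"],
          "complete" if vg["vg_missing_pv_count"] == "0" else "degraded")
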
Oct 12 17:25:08 np0005481680 silly_williams[273185]: {}
Oct 12 17:25:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Oct 12 17:25:09 np0005481680 systemd[1]: libpod-54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a.scope: Deactivated successfully.
Oct 12 17:25:09 np0005481680 podman[273168]: 2025-10-12 21:25:09.005859669 +0000 UTC m=+1.050313071 container died 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:25:09 np0005481680 systemd[1]: libpod-54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a.scope: Consumed 1.429s CPU time.
Oct 12 17:25:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-2da68571f565f75cf25b3ab1a86eadc0f4d05f81a9ede9fa20c377d1536adf53-merged.mount: Deactivated successfully.
Oct 12 17:25:09 np0005481680 podman[273168]: 2025-10-12 21:25:09.073733623 +0000 UTC m=+1.118187015 container remove 54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:25:09 np0005481680 systemd[1]: libpod-conmon-54bc6b1900a3e14be9d99387609489569811fbb1681e7ace1a7daeb42b15fa5a.scope: Deactivated successfully.
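
That completes the second short-lived helper container in this window: podman[273168] logs the full lifecycle (create, image pull, init, start, attach, died, remove), with systemd tearing down the libpod and conmon scopes around it, at a cost of 1.429s CPU time. The random names (dazzling_hermann, silly_williams) and the ceph image, together with the mgr/cephadm config-key writes that follow, suggest these are cephadm exec helpers; silly_williams printed only an empty JSON object. The journal lines are regular enough to fold into per-container timelines; a sketch with a regex written against the lines above:

    import re
    from collections import defaultdict

    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ \S+ \S+ "
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def timelines(journal_lines):
        """Map container id -> ordered (timestamp, event) pairs."""
        out = defaultdict(list)
        for line in journal_lines:
            m = EVENT.search(line)
            if m:
                out[m.group("cid")].append((m.group("ts"), m.group("event")))
        return out
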
Oct 12 17:25:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:25:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:25:09 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:25:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:09.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:25:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:09 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:10 np0005481680 nova_compute[264665]: 2025-10-12 21:25:10.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:25:10 np0005481680 nova_compute[264665]: 2025-10-12 21:25:10.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa98004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:10.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Oct 12 17:25:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:11.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:11 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212511 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
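
The haproxy wrapper fronting NFS-Ganesha reports backend/nfs.cephfs.0 UP after a Layer4 check, which is nothing more than a successful TCP connect (no payload is exchanged). That is very likely also the source of the recurring ganesha svc_vc_recv events: a listener expecting a PROXY protocol header sees the bare probe connection and marks that transport dead, which is noisy but harmless. The whole check, as a sketch (host and port are placeholders):

    import socket

    def layer4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        """True if a TCP connection can be established (haproxy 'check' semantics)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # placeholder endpoint; the real backend address lives in the
    # generated haproxy config for nfs.cephfs
    print(layer4_check("192.0.2.10", 2049))
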
Oct 12 17:25:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:25:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:25:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980045a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Oct 12 17:25:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Oct 12 17:25:15 np0005481680 nova_compute[264665]: 2025-10-12 21:25:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:15.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:15 np0005481680 nova_compute[264665]: 2025-10-12 21:25:15.272 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304300.2713923, 272e54e6-8c70-4d93-838c-b6511e1a9a61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:25:15 np0005481680 nova_compute[264665]: 2025-10-12 21:25:15.272 2 INFO nova.compute.manager [-] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] VM Stopped (Lifecycle Event)#033[00m
Oct 12 17:25:15 np0005481680 nova_compute[264665]: 2025-10-12 21:25:15.291 2 DEBUG nova.compute.manager [None req-8f829ec0-2167-465e-a7f7-3d4d850ca608 - - - - - -] [instance: 272e54e6-8c70-4d93-838c-b6511e1a9a61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:25:15 np0005481680 nova_compute[264665]: 2025-10-12 21:25:15.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:15 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa980045c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:16.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:25:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:17.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
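
The alertmanager dispatch error means the ceph-dashboard webhook receivers on compute-1 and compute-2 did not answer before the notification deadline; alertmanager retried twice per target before cancelling. The receiving side is just an HTTP endpoint that accepts alertmanager's JSON POST; a minimal stand-in sketch (the path and port come from the log, everything else is hypothetical):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body)  # alertmanager webhook payload
            print(payload.get("status"),
                  [a["labels"] for a in payload.get("alerts", [])])
            self.send_response(200)
            self.end_headers()

    # HTTPServer(("", 8443), Receiver).serve_forever()  # port from the log
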
Oct 12 17:25:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:17 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:25:18
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', '.mgr', '.nfs', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'images']
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:25:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:25:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:25:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:18.362 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:18.363 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:18.363 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
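
The Acquiring/acquired/released triplet that oslo.concurrency logs around _check_child_processes is an in-process named lock; neutron serializes its process-monitor passes with it so only one check inspects child processes at a time. The idiom behind those three log lines is roughly this (lock name taken from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # runs under the named lock; oslo emits the Acquiring /
        # acquired / released journal lines seen above
        pass

    # equivalent context-manager form:
    with lockutils.lock("_check_child_processes"):
        pass
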
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:18.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
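
Every pg_autoscaler pool line above follows one formula: raw PG target = share of root capacity x pool bias x a PG budget. From the logged numbers the budget works out to exactly 300 for every pool (for .mgr, 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337; for cephfs.cephfs.meta the 4.0 bias is applied the same way); in the autoscaler that constant derives from mon_target_pg_per_osd and the cluster's OSD and replica counts. The raw target is quantized to a power of two, and a change is only applied when it differs from the current pg_num by more than the threshold (3x by default), which is why every pool stays at its current value here. A sketch of the arithmetic, with the budget inferred from the log:

    def raw_pg_target(used_ratio: float, bias: float, pg_budget: int = 300) -> float:
        # pg_budget inferred from the log: target / (ratio * bias) == 300 per pool
        return used_ratio * bias * pg_budget

    def quantize(target: float, minimum: int = 1) -> int:
        """Round up to the next power of two, never below `minimum`."""
        n = minimum
        while n < target:
            n *= 2
        return n

    t = raw_pg_target(7.185749983720779e-06, 1.0)     # .mgr pool
    print(t, quantize(t))                             # 0.0021557249951162337 1
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635
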
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:25:18 np0005481680 nova_compute[264665]: 2025-10-12 21:25:18.968 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:18 np0005481680 nova_compute[264665]: 2025-10-12 21:25:18.968 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:18 np0005481680 nova_compute[264665]: 2025-10-12 21:25:18.987 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 12 17:25:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.105 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.105 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.117 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.118 2 INFO nova.compute.claims [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 12 17:25:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.268 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:25:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737339883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.739 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.748 2 DEBUG nova.compute.provider_tree [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.771 2 DEBUG nova.scheduler.client.report [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.800 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
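
The inventory dict in the report above is exactly what placement schedules against: per resource class, effective capacity = (total - reserved) * allocation_ratio, so this host offers 32 schedulable VCPUs (8 x 4.0), 7168 MB of RAM and 52.2 GB of disk, which is why the instance claim at 21:25:19 succeeds immediately. Reproducing the numbers from the logged dict:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 52.2
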
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.801 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.854 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.855 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.882 2 INFO nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.900 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.987 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.988 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 12 17:25:19 np0005481680 nova_compute[264665]: 2025-10-12 21:25:19.989 2 INFO nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Creating image(s)#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.022 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.063 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.100 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.105 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.193 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.194 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.195 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.196 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
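
Before reusing the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, so a corrupt or malicious image cannot make qemu-img consume unbounded resources (here capped at a 1 GiB address space and 30 s of CPU). The logged invocation, reproduced as a subprocess call (arguments copied from the log line):

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"

    out = subprocess.run(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])
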
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.235 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.240 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 33651582-07e4-4ebc-8cd7-74903789e983_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.474 2 DEBUG nova.policy [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 12 17:25:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:20.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.622 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 33651582-07e4-4ebc-8cd7-74903789e983_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.739 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
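
With the base image verified, nova pushes it into the vms pool and grows it to the flavor's root disk size: the import command is logged verbatim above, and the resize to 1073741824 bytes (1 GiB) goes through librbd rather than the CLI. The CLI equivalent of the pair, as a sketch:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"
    IMAGE = "33651582-07e4-4ebc-8cd7-74903789e983_disk"

    # import the flat base file as a format-2 RBD image (logged at 21:25:20.240)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", BASE, IMAGE,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )

    # grow it to the requested root disk size (nova calls librbd's resize();
    # this is the CLI spelling of the same operation)
    subprocess.run(
        ["rbd", "resize", "--pool", "vms", IMAGE, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
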
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.901 2 DEBUG nova.objects.instance [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.915 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.916 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Ensure instance console log exists: /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.916 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.917 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:20 np0005481680 nova_compute[264665]: 2025-10-12 21:25:20.918 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 12 17:25:21 np0005481680 podman[273501]: 2025-10-12 21:25:21.152585821 +0000 UTC m=+0.105854262 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:25:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:21 np0005481680 podman[273502]: 2025-10-12 21:25:21.203482288 +0000 UTC m=+0.153767092 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 12 17:25:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:21 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:22] "GET /metrics HTTP/1.1" 200 48376 "" "Prometheus/2.51.0"
Oct 12 17:25:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:22] "GET /metrics HTTP/1.1" 200 48376 "" "Prometheus/2.51.0"
Oct 12 17:25:22 np0005481680 nova_compute[264665]: 2025-10-12 21:25:22.188 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Successfully created port: 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 12 17:25:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:22.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:22 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:22 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:22.712 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:25:22 np0005481680 nova_compute[264665]: 2025-10-12 21:25:22.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:22 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:22.714 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:25:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 12 17:25:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:23 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.590 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Successfully updated port: 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 12 17:25:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:24.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.618 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.619 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.619 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:25:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:24 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c00c120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.704 2 DEBUG nova.compute.manager [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.705 2 DEBUG nova.compute.manager [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing instance network info cache due to event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.706 2 DEBUG oslo_concurrency.lockutils [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:25:24 np0005481680 nova_compute[264665]: 2025-10-12 21:25:24.785 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 12 17:25:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:25:25 np0005481680 nova_compute[264665]: 2025-10-12 21:25:25.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:25 np0005481680 nova_compute[264665]: 2025-10-12 21:25:25.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:25 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.191 2 DEBUG nova.network.neutron [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.285 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.285 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Instance network_info: |[{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.286 2 DEBUG oslo_concurrency.lockutils [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.286 2 DEBUG nova.network.neutron [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.291 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Start _get_guest_xml network_info=[{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.299 2 WARNING nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.307 2 DEBUG nova.virt.libvirt.host [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.308 2 DEBUG nova.virt.libvirt.host [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.315 2 DEBUG nova.virt.libvirt.host [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.315 2 DEBUG nova.virt.libvirt.host [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.316 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.316 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.317 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.318 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.318 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.318 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.319 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.319 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.320 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.320 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.321 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.321 2 DEBUG nova.virt.hardware [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.325 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:25:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:26.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:25:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:26 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:25:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579477370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.815 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.850 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:26 np0005481680 nova_compute[264665]: 2025-10-12 21:25:26.855 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:25:27 np0005481680 podman[273639]: 2025-10-12 21:25:27.129626931 +0000 UTC m=+0.091826731 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:25:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:27.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:25:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:27.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:25:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:25:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:27.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:25:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:25:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601966691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.304 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.307 2 DEBUG nova.virt.libvirt.vif [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:25:19Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.308 2 DEBUG nova.network.os_vif_util [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.309 2 DEBUG nova.network.os_vif_util [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.311 2 DEBUG nova.objects.instance [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:25:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:27 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c00c120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.416 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <uuid>33651582-07e4-4ebc-8cd7-74903789e983</uuid>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <name>instance-00000003</name>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:25:26</nova:creationTime>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="serial">33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="uuid">33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/33651582-07e4-4ebc-8cd7-74903789e983_disk">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/33651582-07e4-4ebc-8cd7-74903789e983_disk.config">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:90:b8:84"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <target dev="tap0c5e7571-52"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log" append="off"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:25:27 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:25:27 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:25:27 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:25:27 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.417 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Preparing to wait for external event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.417 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.418 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.418 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.419 2 DEBUG nova.virt.libvirt.vif [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:25:19Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.420 2 DEBUG nova.network.os_vif_util [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.421 2 DEBUG nova.network.os_vif_util [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.421 2 DEBUG os_vif [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c5e7571-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c5e7571-52, col_values=(('external_ids', {'iface-id': '0c5e7571-52d2-44ba-9b10-914d5d4b6dcb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:b8:84', 'vm-uuid': '33651582-07e4-4ebc-8cd7-74903789e983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:27 np0005481680 NetworkManager[44859]: <info>  [1760304327.4323] manager: (tap0c5e7571-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.443 2 INFO os_vif [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52')#033[00m
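
The plug sequence above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) is os-vif driving OVSDB through ovsdbapp. A minimal standalone sketch of the same three commands, assuming a local OVSDB unix socket at /run/openvswitch/db.sock and collapsing the driver's two transactions into one; the bridge, port, and external_ids values are copied from the log for illustration only:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        # "Transaction caused no change" in the log: br-int already exists
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        # attach the instance tap device to the integration bridge
        txn.add(api.add_port('br-int', 'tap0c5e7571-52', may_exist=True))
        # the iface-id is what lets ovn-controller match this Interface to
        # the Port_Binding and claim the logical port (see 21:25:29 below)
        txn.add(api.db_set('Interface', 'tap0c5e7571-52',
                           ('external_ids', {
                               'iface-id': '0c5e7571-52d2-44ba-9b10-914d5d4b6dcb',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:90:b8:84',
                               'vm-uuid': '33651582-07e4-4ebc-8cd7-74903789e983'})))
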
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.511 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.512 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.512 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:90:b8:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.513 2 INFO nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Using config drive#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.549 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.815 2 DEBUG nova.network.neutron [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updated VIF entry in instance network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.816 2 DEBUG nova.network.neutron [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:25:27 np0005481680 nova_compute[264665]: 2025-10-12 21:25:27.836 2 DEBUG oslo_concurrency.lockutils [req-3ecd2686-d178-47a3-9966-f0401d1f759a req-d0efd547-c200-4053-959c-3e60d667b16d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.587 2 INFO nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Creating config drive at /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config#033[00m
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.592 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqbkw6gc4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:28.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:28 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.731 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqbkw6gc4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
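
The config drive is generated by shelling out to mkisofs through oslo.concurrency, as the two processutils lines above show. A sketch of the same call; note that the publisher string is passed as a single argument even though the log prints it unquoted, and /tmp/tmpqbkw6gc4 is the staging directory nova populated with the metadata tree beforehand:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              '33651582-07e4-4ebc-8cd7-74903789e983/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpqbkw6gc4')
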
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.775 2 DEBUG nova.storage.rbd_utils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 33651582-07e4-4ebc-8cd7-74903789e983_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.780 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config 33651582-07e4-4ebc-8cd7-74903789e983_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:25:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.994 2 DEBUG oslo_concurrency.processutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config 33651582-07e4-4ebc-8cd7-74903789e983_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:25:28 np0005481680 nova_compute[264665]: 2025-10-12 21:25:28.995 2 INFO nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Deleting local config drive /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/disk.config because it was imported into RBD.#033[00m
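
Because the images backend is RBD, the freshly built ISO is pushed into the vms pool and the local copy removed. Nova shells out to the rbd CLI (above); a rough Python equivalent using the python-rbd bindings, with chunking and error handling simplified, so treat it as a sketch rather than nova's actual code path:

    import os
    import rados
    import rbd

    SRC = ('/var/lib/nova/instances/'
           '33651582-07e4-4ebc-8cd7-74903789e983/disk.config')
    NAME = '33651582-07e4-4ebc-8cd7-74903789e983_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        ioctx = cluster.open_ioctx('vms')
        try:
            # --image-format=2 on the CLI maps to old_format=False here
            rbd.RBD().create(ioctx, NAME, os.path.getsize(SRC),
                             old_format=False)
            with rbd.Image(ioctx, NAME) as image, open(SRC, 'rb') as src:
                offset = 0
                while chunk := src.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
        finally:
            ioctx.close()
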
Oct 12 17:25:29 np0005481680 kernel: tap0c5e7571-52: entered promiscuous mode
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:29Z|00038|binding|INFO|Claiming lport 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb for this chassis.
Oct 12 17:25:29 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:29Z|00039|binding|INFO|0c5e7571-52d2-44ba-9b10-914d5d4b6dcb: Claiming fa:16:3e:90:b8:84 10.100.0.11
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.1041] manager: (tap0c5e7571-52): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 systemd-machined[218338]: New machine qemu-2-instance-00000003.
Oct 12 17:25:29 np0005481680 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Oct 12 17:25:29 np0005481680 systemd-udevd[273737]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:25:29 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:29Z|00040|binding|INFO|Setting lport 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb ovn-installed in OVS
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.2218] device (tap0c5e7571-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.2227] device (tap0c5e7571-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:25:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:29.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:29 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:29Z|00041|binding|INFO|Setting lport 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb up in Southbound
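
ovn-controller has now matched the Interface's external_ids:iface-id against the Southbound Port_Binding, claimed the lport for this chassis, marked it ovn-installed in OVS, and set it up in the Southbound DB; that last step is what ultimately drives neutron's network-vif-plugged notification to nova. A hedged way to verify the result from the Northbound side with ovsdbapp (the NB endpoint address below is an assumption, not a value from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_northbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'tcp:127.0.0.1:6641', 'OVN_Northbound')  # assumed NB endpoint
    nb = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl=idl, timeout=5))

    # True once ovn-controller reports the logical switch port bound and up
    up = nb.lsp_get_up('0c5e7571-52d2-44ba-9b10-914d5d4b6dcb').execute(
        check_error=True)
    print(up)
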
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.290 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:b8:84 10.100.0.11'], port_security=['fa:16:3e:90:b8:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '33651582-07e4-4ebc-8cd7-74903789e983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94f6889e-47b5-40e5-a758-6153d625c1cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccfa101b-afca-486c-8c0f-cd96615ea67e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd74f6a0-f3bd-4453-9a4f-5d8ee236e898, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.291 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb in datapath 94f6889e-47b5-40e5-a758-6153d625c1cd bound to our chassis#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.292 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94f6889e-47b5-40e5-a758-6153d625c1cd#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.310 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[95542632-cd72-4b35-82e2-c84536eaa335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.311 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94f6889e-41 in ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.314 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94f6889e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.314 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[21be1940-522b-4e64-94b3-aa8bf255b06b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.315 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[49613127-bc84-4326-98a4-a480a6499e41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.338 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[20bc77bd-764c-4b52-a1db-1694edaf55fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:29 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.370 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[6f55b205-afd1-44aa-b92d-7dd8569825aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.413 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[2939ff4f-2f82-4873-bd36-d15451703057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.424 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bb3c15-7120-446b-95c2-95429109fe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.4251] manager: (tap94f6889e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.479 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[48a21b59-ebfc-4976-bb13-05232b8bb1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.483 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed30428-4f81-4b30-aa60-e8bf0b1f3d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.5186] device (tap94f6889e-40): carrier: link connected
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.527 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[470ea9d1-5175-4641-8bdd-35ca723652fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.553 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aa8c20-c9f8-46ea-9289-2f5494f495a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94f6889e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:83:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399615, 'reachable_time': 25054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273772, 'error': None, 'target': 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.576 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c2a614-1203-4bb3-bc81-e89e3dfaaead]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:83fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 399615, 'tstamp': 399615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273773, 'error': None, 'target': 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.599 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d44f2b1a-4ad4-470d-bcd1-ed7a4140135d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94f6889e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:83:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399615, 'reachable_time': 25054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273774, 'error': None, 'target': 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.651 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2e2646-ec4a-4708-be46-bbf307efe29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.729 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[437254b5-4fbd-46d8-8530-85d5bfa3bbc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
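
Provisioning the metadata datapath starts with a veth pair: tap94f6889e-40 stays in the root namespace (it gets plugged into br-int just below), while tap94f6889e-41 lands inside the ovnmeta-94f6889e-... namespace, which is where the pyroute2-style RTM_NEWLINK dumps above come from. A minimal root-only sketch of the same move with pyroute2, assuming the namespace does not exist yet:

    from pyroute2 import IPRoute, NetNS, netns

    NS = 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd'
    netns.create(NS)

    ip = IPRoute()
    try:
        ip.link('add', ifname='tap94f6889e-40', kind='veth',
                peer='tap94f6889e-41')
        # move the -41 end into the metadata namespace
        idx = ip.link_lookup(ifname='tap94f6889e-41')[0]
        ip.link('set', index=idx, net_ns_fd=NS)
        ip.link('set', index=ip.link_lookup(ifname='tap94f6889e-40')[0],
                state='up')
    finally:
        ip.close()

    # bring the inner end up from inside the namespace
    with NetNS(NS) as ns:
        ns.link('set', index=ns.link_lookup(ifname='tap94f6889e-41')[0],
                state='up')
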
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.732 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94f6889e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.733 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.734 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94f6889e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 NetworkManager[44859]: <info>  [1760304329.7386] manager: (tap94f6889e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 12 17:25:29 np0005481680 kernel: tap94f6889e-40: entered promiscuous mode
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.744 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94f6889e-40, col_values=(('external_ids', {'iface-id': '44e43b9b-4616-4f52-be04-796d4bf640d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:29Z|00042|binding|INFO|Releasing lport 44e43b9b-4616-4f52-be04-796d4bf640d4 from this chassis (sb_readonly=0)
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.756 2 DEBUG nova.compute.manager [req-ba3448ee-65ec-4b1c-8c0d-17fddb47fb78 req-068b9c6a-bfa5-46cc-ab51-2f7866d30960 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.757 2 DEBUG oslo_concurrency.lockutils [req-ba3448ee-65ec-4b1c-8c0d-17fddb47fb78 req-068b9c6a-bfa5-46cc-ab51-2f7866d30960 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.758 2 DEBUG oslo_concurrency.lockutils [req-ba3448ee-65ec-4b1c-8c0d-17fddb47fb78 req-068b9c6a-bfa5-46cc-ab51-2f7866d30960 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.759 2 DEBUG oslo_concurrency.lockutils [req-ba3448ee-65ec-4b1c-8c0d-17fddb47fb78 req-068b9c6a-bfa5-46cc-ab51-2f7866d30960 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.759 2 DEBUG nova.compute.manager [req-ba3448ee-65ec-4b1c-8c0d-17fddb47fb78 req-068b9c6a-bfa5-46cc-ab51-2f7866d30960 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Processing event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
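
The network-vif-plugged event being popped here is delivered by neutron over nova's os-server-external-events API once OVN reports the port up. A hedged reconstruction of that notification as a raw REST call; the endpoint URL and token are placeholders, not values taken from this log:

    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'  # assumed endpoint
    TOKEN = '...'                                   # assumed keystone token

    resp = requests.post(
        f'{NOVA}/os-server-external-events',
        headers={'X-Auth-Token': TOKEN},
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '33651582-07e4-4ebc-8cd7-74903789e983',
            'tag': '0c5e7571-52d2-44ba-9b10-914d5d4b6dcb',  # the port id
            'status': 'completed',
        }]})
    resp.raise_for_status()
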
Oct 12 17:25:29 np0005481680 nova_compute[264665]: 2025-10-12 21:25:29.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.778 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94f6889e-47b5-40e5-a758-6153d625c1cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94f6889e-47b5-40e5-a758-6153d625c1cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.779 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9494d1-e4a7-4f39-9662-3c4ea6bffde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.780 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-94f6889e-47b5-40e5-a758-6153d625c1cd
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/94f6889e-47b5-40e5-a758-6153d625c1cd.pid.haproxy
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID 94f6889e-47b5-40e5-a758-6153d625c1cd
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
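
The rendered haproxy config binds 169.254.169.254:80 inside the metadata namespace, stamps each request with X-OVN-Network-ID so the agent can tell which network it arrived from, and forwards to the unix socket at /var/lib/neutron/metadata_proxy where the OVN metadata agent listens; in this podified deployment the haproxy itself runs inside the podman container created a few lines below. From the guest's point of view this is the standard metadata endpoint, for example (assuming a guest image with Python and requests, shown for illustration):

    import requests

    # served through the proxy chain above; 'uuid' should come back as
    # 33651582-07e4-4ebc-8cd7-74903789e983 for this instance
    r = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json', timeout=10)
    r.raise_for_status()
    print(r.json()['uuid'])
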
Oct 12 17:25:29 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:29.781 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd', 'env', 'PROCESS_TAG=haproxy-94f6889e-47b5-40e5-a758-6153d625c1cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94f6889e-47b5-40e5-a758-6153d625c1cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 12 17:25:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212529 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:25:30 np0005481680 podman[273850]: 2025-10-12 21:25:30.244636266 +0000 UTC m=+0.072538955 container create 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:30 np0005481680 podman[273850]: 2025-10-12 21:25:30.205265684 +0000 UTC m=+0.033168423 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:25:30 np0005481680 systemd[1]: Started libpod-conmon-0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0.scope.
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.306 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.307 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304330.3058624, 33651582-07e4-4ebc-8cd7-74903789e983 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.307 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] VM Started (Lifecycle Event)#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.312 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.316 2 INFO nova.virt.libvirt.driver [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Instance spawned successfully.#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.316 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:25:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:25:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebf4d276202073613beb8037d4f3f69ddbca5c62d4c7a0310ce0ab4fdf28f72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:25:30 np0005481680 podman[273850]: 2025-10-12 21:25:30.359774505 +0000 UTC m=+0.187677164 container init 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:25:30 np0005481680 podman[273850]: 2025-10-12 21:25:30.366914949 +0000 UTC m=+0.194817608 container start 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.386 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:25:30 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [NOTICE]   (273870) : New worker (273872) forked
Oct 12 17:25:30 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [NOTICE]   (273870) : Loading success.
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.391 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.391 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.392 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.393 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.396 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.397 2 DEBUG nova.virt.libvirt.driver [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.406 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.442 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.443 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304330.3062406, 33651582-07e4-4ebc-8cd7-74903789e983 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.443 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.470 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.474 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304330.3106592, 33651582-07e4-4ebc-8cd7-74903789e983 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.475 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.491 2 INFO nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Took 10.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.492 2 DEBUG nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.503 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.508 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.534 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.555 2 INFO nova.compute.manager [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Took 11.50 seconds to build instance.#033[00m
Oct 12 17:25:30 np0005481680 nova_compute[264665]: 2025-10-12 21:25:30.589 2 DEBUG oslo_concurrency.lockutils [None req-da9be2f4-ba98-4394-a1b8-9e16ebbc9ff0 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c00c120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:30.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:30 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c00c120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 12 17:25:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:31.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:31 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.907 2 DEBUG nova.compute.manager [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.907 2 DEBUG oslo_concurrency.lockutils [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.908 2 DEBUG oslo_concurrency.lockutils [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.908 2 DEBUG oslo_concurrency.lockutils [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.908 2 DEBUG nova.compute.manager [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 12 17:25:31 np0005481680 nova_compute[264665]: 2025-10-12 21:25:31.909 2 WARNING nova.compute.manager [req-64dcdd44-dbcb-465c-bcf7-6d6cf0963a99 req-363690bf-1822-4b72-be88-4d3f6fe89575 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb for instance with vm_state active and task_state None.
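This burst shows nova's external-event plumbing end to end: neutron delivers network-vif-plugged, the per-instance events lock is taken, pop_instance_event finds no registered waiter (the build already finished), and the event is logged as unexpected and dropped. A hedged sketch of the waiter-registry idea behind it; names are illustrative, not nova's internals:

    _waiters = {}  # {instance_uuid: {event_name: callback}}

    def pop_instance_event(instance_uuid, event_name):
        waiter = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # corresponds to "No waiting events found dispatching ..." above,
            # followed by the "Received unexpected event" warning
            return None
        return waiter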
Oct 12 17:25:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:32] "GET /metrics HTTP/1.1" 200 48376 "" "Prometheus/2.51.0"
Oct 12 17:25:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:32] "GET /metrics HTTP/1.1" 200 48376 "" "Prometheus/2.51.0"
Oct 12 17:25:32 np0005481680 nova_compute[264665]: 2025-10-12 21:25:32.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:32 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:32 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:32.717 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:25:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 12 17:25:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:25:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
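The mgr polls the monitor for the OSD blocklist on a timer; the mon_command JSON above is exactly what the plain CLI sends. An equivalent one-off query, illustrative rather than taken from the mgr module:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    entries = json.loads(out) if out.strip() else []
    # each entry names a blocklisted client address and its expiry time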
Oct 12 17:25:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:33 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:34 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.665 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:34 np0005481680 NetworkManager[44859]: <info>  [1760304334.9279] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 12 17:25:34 np0005481680 NetworkManager[44859]: <info>  [1760304334.9293] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 12 17:25:34 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:34Z|00043|binding|INFO|Releasing lport 44e43b9b-4616-4f52-be04-796d4bf640d4 from this chassis (sb_readonly=0)
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:34 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:34Z|00044|binding|INFO|Releasing lport 44e43b9b-4616-4f52-be04-796d4bf640d4 from this chassis (sb_readonly=0)
Oct 12 17:25:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 12 17:25:34 np0005481680 nova_compute[264665]: 2025-10-12 21:25:34.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.199 2 DEBUG nova.compute.manager [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.199 2 DEBUG nova.compute.manager [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing instance network info cache due to event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.200 2 DEBUG oslo_concurrency.lockutils [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.200 2 DEBUG oslo_concurrency.lockutils [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.201 2 DEBUG nova.network.neutron [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 12 17:25:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:35.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:35 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:35 np0005481680 nova_compute[264665]: 2025-10-12 21:25:35.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:36 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:36 np0005481680 nova_compute[264665]: 2025-10-12 21:25:36.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:36 np0005481680 nova_compute[264665]: 2025-10-12 21:25:36.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:36 np0005481680 nova_compute[264665]: 2025-10-12 21:25:36.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:25:36 np0005481680 nova_compute[264665]: 2025-10-12 21:25:36.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:25:36 np0005481680 nova_compute[264665]: 2025-10-12 21:25:36.887 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:25:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.067 2 DEBUG nova.network.neutron [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updated VIF entry in instance network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.068 2 DEBUG nova.network.neutron [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.085 2 DEBUG oslo_concurrency.lockutils [req-403d6196-88b3-458a-aa67-d019848db373 req-201212f5-d920-482c-a56c-bf40bef787f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.086 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.087 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.087 2 DEBUG nova.objects.instance [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
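The instance_info_cache blob logged at 21:25:37.068 above is ordinary JSON once the log prefix is stripped. A quick way to pull the addressing out of a captured copy, assuming the list has been saved into a variable named network_info:

    fixed = [ip["address"]
             for vif in network_info
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]
    floating = [fip["address"]
                for vif in network_info
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                for fip in ip.get("floating_ips", [])]
    # For the entry above: fixed == ["10.100.0.11"],
    # floating == ["192.168.122.191"], MAC fa:16:3e:90:b8:84 on tap0c5e7571-52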
Oct 12 17:25:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:37.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:25:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:37.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
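Both alertmanager notifications above die against the ceph-dashboard webhook receivers on compute-1 and compute-2, port 8443 (first an i/o timeout, then the retries canceled on the context deadline). A minimal reachability probe against the same endpoints, purely illustrative:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, 8443, "reachable")
        except OSError as exc:
            print(host, 8443, "unreachable:", exc)  # the timeouts logged above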
Oct 12 17:25:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:37.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:37 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:37 np0005481680 nova_compute[264665]: 2025-10-12 21:25:37.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:38 np0005481680 podman[273891]: 2025-10-12 21:25:38.142004013 +0000 UTC m=+0.092817186 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
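The health_status=healthy event above is podman running the container's configured healthcheck; per the config_data in the same line, the check is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. The scheduled run is equivalent to invoking it by hand:

    import subprocess

    # exit status 0 means healthy, matching the health_status field above
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=False)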
Oct 12 17:25:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:38 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:25:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:39 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:25:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:39.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:39 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.451 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.490 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.491 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.492 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.492 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.493 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.518 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.518 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.519 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.520 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:25:40 np0005481680 nova_compute[264665]: 2025-10-12 21:25:40.520 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
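update_available_resource shells out to ceph df with the openstack client identity to size the RBD-backed disk pool, exactly as the "Running cmd (subprocess)" line shows. A minimal equivalent of that probe; the JSON field names are an assumption based on recent Ceph releases:

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    free_gb = stats["total_avail_bytes"] / 2**30  # feeds free_disk below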
Oct 12 17:25:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:40.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:40 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:25:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594915729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:25:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.013 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.111 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.112 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 12 17:25:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:41.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:41 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.438 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.440 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4443MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.440 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.441 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.575 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance 33651582-07e4-4ebc-8cd7-74903789e983 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.576 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.577 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:25:41 np0005481680 nova_compute[264665]: 2025-10-12 21:25:41.611 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:25:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:42] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:25:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:42] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:25:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:25:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:25:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:25:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548601257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.209 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.220 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.239 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.280 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.280 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:25:42 np0005481680 nova_compute[264665]: 2025-10-12 21:25:42.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
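The inventory record above is what Placement uses to bound allocations; the standard arithmetic is effective capacity = (total - reserved) * allocation_ratio per resource class. Worked out for the logged values:

    inv = {"MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
           "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
           "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9}}
    cap = {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
           for rc, v in inv.items()}
    # cap == {'MEMORY_MB': 7168.0, 'VCPU': 32.0, 'DISK_GB': 52.2}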
Oct 12 17:25:42 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:42Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:b8:84 10.100.0.11
Oct 12 17:25:42 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:42Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:b8:84 10.100.0.11
Oct 12 17:25:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:42.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:42 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 70 op/s
Oct 12 17:25:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:43.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:43 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:44.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:44 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 12 17:25:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
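Grace was entered at 17:25:39 with a 90-second duration, yet it lifts here only six seconds later: the reaper's reload found no clients with reclaimable state ("reclaim complete(0) clid count(0)" above), so ganesha may drop grace early. A distilled sketch of that condition, hedged rather than lifted from the ganesha source:

    def can_lift_grace(clid_count, reclaim_complete):
        # no clients to wait for, or every known client finished reclaim
        return clid_count == 0 or reclaim_complete >= clid_count

    can_lift_grace(0, 0)  # -> True: grace lifted well before the 90 s expire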
Oct 12 17:25:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:45.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:45 np0005481680 nova_compute[264665]: 2025-10-12 21:25:45.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:45 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:46 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:25:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:47.185Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:25:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:47.185Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:25:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:47.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:25:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:47.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:47 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:47 np0005481680 nova_compute[264665]: 2025-10-12 21:25:47.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:25:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:25:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:25:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:48 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:25:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:49 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:49 np0005481680 nova_compute[264665]: 2025-10-12 21:25:49.827 2 INFO nova.compute.manager [None req-fa670662-c457-4608-a5a4-8d0c2337e94b 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Get console output
Oct 12 17:25:49 np0005481680 nova_compute[264665]: 2025-10-12 21:25:49.836 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 12 17:25:50 np0005481680 nova_compute[264665]: 2025-10-12 21:25:50.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:25:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:50.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
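[Editor's note] The beast access lines repeating through this capture follow a fixed layout: request pointer, remote IP, user, timestamp, request line, status, bytes, then a trailing latency field. A small sketch, assuming that layout holds, that pulls out the fields worth graphing:

    import re

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:25:50.641 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    # Extract remote address, request line, status code and latency from
    # one radosgw beast access-log record.
    m = re.search(
        r'beast: \S+: (\S+) .*?"([^"]+)" (\d+) \d+ .* latency=([\d.]+)s',
        line)
    if m:
        addr, request, status, latency = m.groups()
        print(addr, request, status, float(latency))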
Oct 12 17:25:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:50 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 12 17:25:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
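[Editor's note] The _set_new_cache_sizes figures are raw byte counts; converting them makes the split easier to read (roughly 973 MiB total, 332 MiB each for inc/full alloc, 304 MiB for the kv cache). A quick check, assuming these are plain bytes:

    # Convert the mon cache-tuning byte counts above into MiB.
    for name, nbytes in {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }.items():
        print(f"{name}: {nbytes / 2**20:.0f} MiB")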
Oct 12 17:25:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:51 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212551 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:25:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:52] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:25:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:25:52] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:25:52 np0005481680 podman[273994]: 2025-10-12 21:25:52.145348492 +0000 UTC m=+0.099252601 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 12 17:25:52 np0005481680 podman[273995]: 2025-10-12 21:25:52.200724556 +0000 UTC m=+0.150945220 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 12 17:25:52 np0005481680 nova_compute[264665]: 2025-10-12 21:25:52.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:52 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:25:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8910 writes, 33K keys, 8910 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8910 writes, 2141 syncs, 4.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1180 writes, 2991 keys, 1180 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s#012Interval WAL: 1180 writes, 538 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
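[Editor's note] syslog escapes control characters octally, which is why the multi-line RocksDB stats dump above arrives as one record with #012 standing in for newlines (and #033 for the ESC of the ANSI reset codes elsewhere in this log). A small sketch to restore the original layout of such records:

    # Undo the octal escapes syslog applies to control characters, so the
    # "DUMPING STATS" record above reads as the multi-line table it was.
    def unescape(record: str) -> str:
        return record.replace("#012", "\n").replace("#033", "\x1b")

    print(unescape("** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval"))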
Oct 12 17:25:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 12 17:25:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.339 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.340 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
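[Editor's note] The Acquiring/acquired pair above (and the matching "released ... held 6.427s" line later in this capture) is the standard oslo.concurrency lock trace. A minimal sketch of the same primitive, assuming oslo.concurrency is installed; the lock name is copied from the log for illustration only:

    from oslo_concurrency import lockutils

    # Serialize interface attachment per instance, the way the compute
    # manager's do_attach_interface critical section is serialized above.
    @lockutils.synchronized("interface-33651582-07e4-4ebc-8cd7-74903789e983-None")
    def do_attach_interface():
        ...  # attach work happens while the lock is held

    do_attach_interface()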
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.340 2 DEBUG nova.objects.instance [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'flavor' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:25:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:53 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.813 2 DEBUG nova.objects.instance [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_requests' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.828 2 DEBUG nova.network.neutron [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 12 17:25:53 np0005481680 nova_compute[264665]: 2025-10-12 21:25:53.997 2 DEBUG nova.policy [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 12 17:25:54 np0005481680 nova_compute[264665]: 2025-10-12 21:25:54.469 2 DEBUG nova.network.neutron [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Successfully created port: 4957103a-6a21-4535-9c0e-541b9fd3326d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 12 17:25:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:54 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.184 2 DEBUG nova.network.neutron [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Successfully updated port: 4957103a-6a21-4535-9c0e-541b9fd3326d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.202 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.203 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.203 2 DEBUG nova.network.neutron [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.293 2 DEBUG nova.compute.manager [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-changed-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.294 2 DEBUG nova.compute.manager [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing instance network info cache due to event network-changed-4957103a-6a21-4535-9c0e-541b9fd3326d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.295 2 DEBUG oslo_concurrency.lockutils [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:25:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:55 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:55 np0005481680 nova_compute[264665]: 2025-10-12 21:25:55.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:25:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:25:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:56.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:25:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:56 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 12 KiB/s wr, 1 op/s
Oct 12 17:25:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:25:57.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
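[Editor's note] The dispatcher error above shows Alertmanager timing out against the dashboard webhooks at /api/prometheus_receiver on port 8443 of compute-1 and compute-2. A throwaway stand-in receiver for connectivity testing (hypothetical, plain HTTP, not the dashboard's actual endpoint) can confirm whether the port and path are reachable at all:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Drain and acknowledge the alert payload so Alertmanager's
            # notify retry succeeds; the body content is ignored here.
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()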
Oct 12 17:25:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:57.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:57 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa84001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:57 np0005481680 nova_compute[264665]: 2025-10-12 21:25:57.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:58 np0005481680 podman[274044]: 2025-10-12 21:25:58.16357698 +0000 UTC m=+0.117973013 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 12 17:25:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:25:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:58 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 12 KiB/s wr, 1 op/s
Oct 12 17:25:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:25:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:25:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:25:59.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:25:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:25:59 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.464 2 DEBUG nova.network.neutron [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.490 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.491 2 DEBUG oslo_concurrency.lockutils [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.491 2 DEBUG nova.network.neutron [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing network info cache for port 4957103a-6a21-4535-9c0e-541b9fd3326d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.496 2 DEBUG nova.virt.libvirt.vif [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.497 2 DEBUG nova.network.os_vif_util [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.498 2 DEBUG nova.network.os_vif_util [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.499 2 DEBUG os_vif [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4957103a-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4957103a-6a, col_values=(('external_ids', {'iface-id': '4957103a-6a21-4535-9c0e-541b9fd3326d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:8d:7d', 'vm-uuid': '33651582-07e4-4ebc-8cd7-74903789e983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
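[Editor's note] The two ovsdbapp transactions above correspond to ordinary ovs-vsctl operations: an idempotent add-br/add-port pair plus setting the Interface external_ids (iface-id, attached-mac, vm-uuid) that let ovn-controller claim the port a few lines below. A sketch of the equivalent CLI calls, driven from Python; values are copied from the log:

    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    # AddBridgeCommand(may_exist=True, datapath_type=system)
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand(may_exist=True) + DbSetCommand on the Interface row
    vsctl("--may-exist", "add-port", "br-int", "tap4957103a-6a",
          "--", "set", "Interface", "tap4957103a-6a",
          "external_ids:iface-id=4957103a-6a21-4535-9c0e-541b9fd3326d",
          "external_ids:iface-status=active",
          "external_ids:attached-mac=fa:16:3e:30:8d:7d",
          "external_ids:vm-uuid=33651582-07e4-4ebc-8cd7-74903789e983")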
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.5106] manager: (tap4957103a-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.518 2 INFO os_vif [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a')#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.520 2 DEBUG nova.virt.libvirt.vif [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.520 2 DEBUG nova.network.os_vif_util [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.521 2 DEBUG nova.network.os_vif_util [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.525 2 DEBUG nova.virt.libvirt.guest [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] attach device xml: <interface type="ethernet">
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <mac address="fa:16:3e:30:8d:7d"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <model type="virtio"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <mtu size="1442"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <target dev="tap4957103a-6a"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]: </interface>
Oct 12 17:25:59 np0005481680 nova_compute[264665]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
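[Editor's note] guest.attach_device hands the interface XML above to libvirt's device-attach call. A minimal sketch with libvirt-python, assuming a local qemu connection; the flags mirror a live attach that also persists in the domain definition:

    import libvirt

    XML = """<interface type="ethernet">
      <mac address="fa:16:3e:30:8d:7d"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap4957103a-6a"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("33651582-07e4-4ebc-8cd7-74903789e983")
    # Attach to the running guest and persist the change in its definition.
    dom.attachDeviceFlags(
        XML, libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)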
Oct 12 17:25:59 np0005481680 kernel: tap4957103a-6a: entered promiscuous mode
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.5442] manager: (tap4957103a-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 12 17:25:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:59Z|00045|binding|INFO|Claiming lport 4957103a-6a21-4535-9c0e-541b9fd3326d for this chassis.
Oct 12 17:25:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:59Z|00046|binding|INFO|4957103a-6a21-4535-9c0e-541b9fd3326d: Claiming fa:16:3e:30:8d:7d 10.100.0.22
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.556 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:8d:7d 10.100.0.22'], port_security=['fa:16:3e:30:8d:7d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '33651582-07e4-4ebc-8cd7-74903789e983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44aed212-836a-4e2f-8b2a-57d636f542a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45c1af83-66cf-4f12-b9f3-589fae4453b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edb4ea0-1835-4a0d-84c0-d448b049b26b, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=4957103a-6a21-4535-9c0e-541b9fd3326d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.558 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 4957103a-6a21-4535-9c0e-541b9fd3326d in datapath 44aed212-836a-4e2f-8b2a-57d636f542a7 bound to our chassis#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.560 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44aed212-836a-4e2f-8b2a-57d636f542a7#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.579 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbc6d79-a81f-44b3-bc2c-33b7e778f203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.580 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44aed212-81 in ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.582 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44aed212-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.582 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[62378d86-6abe-48a7-baa1-b28ff36cad58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.583 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[b12d055f-8ba7-44d8-9d7d-64f55e70a5b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.604 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea7a4e9-fed4-4575-b844-13d11a1c3fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
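[Editor's note] Provisioning metadata for the datapath means creating a veth pair and moving one end into the ovnmeta- namespace, which is what the privsep calls above are doing on the agent's behalf. A simplified sketch of the equivalent ip(8) steps, driven from Python, with names taken from the log; the real agent performs additional addressing and OVS plugging not shown here:

    import subprocess

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    ns = "ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7"
    # tap44aed212-80 stays in the root namespace; tap44aed212-81 moves
    # into the metadata namespace, matching the "Creating VETH" line above.
    ip("netns", "add", ns)
    ip("link", "add", "tap44aed212-80", "type", "veth",
       "peer", "name", "tap44aed212-81")
    ip("link", "set", "tap44aed212-81", "netns", ns)
    ip("-n", ns, "link", "set", "tap44aed212-81", "up")
    ip("link", "set", "tap44aed212-80", "up")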
Oct 12 17:25:59 np0005481680 systemd-udevd[274074]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:59Z|00047|binding|INFO|Setting lport 4957103a-6a21-4535-9c0e-541b9fd3326d ovn-installed in OVS
Oct 12 17:25:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:59Z|00048|binding|INFO|Setting lport 4957103a-6a21-4535-9c0e-541b9fd3326d up in Southbound
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.628 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[3546ee86-e3d2-4d96-b18e-4acf83104a64]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.6404] device (tap4957103a-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.6417] device (tap4957103a-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.678 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c0d57e-6481-4a69-8c95-766ce6b358e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.688 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdc60a1-cd4d-4f0d-97a9-814057b663fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.6897] manager: (tap44aed212-80): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 12 17:25:59 np0005481680 systemd-udevd[274078]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.709 2 DEBUG nova.virt.libvirt.driver [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.709 2 DEBUG nova.virt.libvirt.driver [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.709 2 DEBUG nova.virt.libvirt.driver [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:90:b8:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.710 2 DEBUG nova.virt.libvirt.driver [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:30:8d:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.733 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[c505012e-a535-4edb-bbb1-0dafa2495c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.738 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc83dd6-d1b8-4c5d-a4dd-a2dc81aedb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.738 2 DEBUG nova.virt.libvirt.guest [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:25:59</nova:creationTime>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:25:59 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    <nova:port uuid="4957103a-6a21-4535-9c0e-541b9fd3326d">
Oct 12 17:25:59 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:25:59 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:25:59 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:25:59 np0005481680 nova_compute[264665]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
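[Editor's note] The nova:instance document above is stored on the domain through libvirt's per-domain metadata API. A minimal sketch, assuming the same connection and domain as in the attach step; the metadata key and the stand-in element are illustrative, not nova's exact call:

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("33651582-07e4-4ebc-8cd7-74903789e983")
    # Replace the nova-namespaced element in the live domain's <metadata>;
    # "<instance/>" stands in for the full XML logged above.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                    "<instance/>", "nova", NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE)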
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.7624] device (tap44aed212-80): carrier: link connected
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.766 2 DEBUG oslo_concurrency.lockutils [None req-f73e406a-fbb0-4daf-a4a8-09c205c23021 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.767 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8657d2-721a-4001-9df8-8835d7ce58c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.795 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6102bb-8b2a-4a8d-88ef-e985fa12b042]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44aed212-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:af:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402639, 'reachable_time': 35395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274102, 'error': None, 'target': 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.814 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[dad59142-4718-433a-b07b-aa952f66ffb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:af37'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402639, 'tstamp': 402639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274103, 'error': None, 'target': 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.837 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d48feb52-c1a3-4467-a8a9-98f489c85b48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44aed212-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:af:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402639, 'reachable_time': 35395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274104, 'error': None, 'target': 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
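
The two large privsep replies above are pyroute2-style netlink messages (RTM_NEWLINK/RTM_NEWADDR) collected inside the metadata namespace named in each message's 'target' header. A rough equivalent of that link dump with pyroute2, assuming root privileges and that the namespace still exists:

    from pyroute2 import NetNS

    # Namespace name taken from the 'target' field in the headers above.
    with NetNS("ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7") as ns:
        for link in ns.get_links():
            print(
                link.get_attr("IFLA_IFNAME"),     # e.g. tap44aed212-81
                link.get_attr("IFLA_ADDRESS"),    # e.g. fa:16:3e:4b:af:37
                link.get_attr("IFLA_OPERSTATE"),  # e.g. UP
            )
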
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.882 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[84c4c5a2-24ec-4b3c-a6ca-7e16cad8c401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.883 2 DEBUG nova.compute.manager [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.884 2 DEBUG oslo_concurrency.lockutils [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.884 2 DEBUG oslo_concurrency.lockutils [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.884 2 DEBUG oslo_concurrency.lockutils [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.885 2 DEBUG nova.compute.manager [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.885 2 WARNING nova.compute.manager [req-447a4a7e-80d8-477f-b68c-627e2077a8ce req-307bdf6b-8b55-4648-8843-e47052fefc61 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d for instance with vm_state active and task_state None.#033[00m
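
The acquire/wait/release triplet above is oslo.concurrency's standard lock logging: Nova serializes its per-instance event bookkeeping on a "<uuid>-events" lock, and because no waiter had registered for this event, the network-vif-plugged notification is reported as unexpected (benign here, since the attach had already completed at 17:25:59.766). A minimal sketch of the same pattern, using a hypothetical in-memory events dict rather than Nova's exact code:

    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        # Same pattern as the log lines above: take the per-instance
        # "<uuid>-events" lock, pop a registered waiter if there is one,
        # and return None when nobody is waiting for this event.
        with lockutils.lock("%s-events" % instance_uuid):
            return events.get(instance_uuid, {}).pop(event_name, None)
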
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.960 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[cae441c6-5662-45ff-a7af-b9064b6a37d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.961 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44aed212-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.961 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.962 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44aed212-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 kernel: tap44aed212-80: entered promiscuous mode
Oct 12 17:25:59 np0005481680 NetworkManager[44859]: <info>  [1760304359.9662] manager: (tap44aed212-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.967 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44aed212-80, col_values=(('external_ids', {'iface-id': '587943d8-dfb2-45f9-bf51-2490c7189f85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
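
The three ovsdbapp transactions above move the tap device off br-ex, plug it into br-int, and stamp the Interface row's external_ids with the OVN port UUID so ovn-controller can bind the port. Sketched below with ovsdbapp against the local switch database; the agent issues these as three one-command transactions (txn n=1 each), collapsed into one here for brevity, and the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap44aed212-80", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap44aed212-80", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap44aed212-80",
            ("external_ids",
             {"iface-id": "587943d8-dfb2-45f9-bf51-2490c7189f85"})))
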
Oct 12 17:25:59 np0005481680 ovn_controller[154617]: 2025-10-12T21:25:59Z|00049|binding|INFO|Releasing lport 587943d8-dfb2-45f9-bf51-2490c7189f85 from this chassis (sb_readonly=0)
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.970 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44aed212-836a-4e2f-8b2a-57d636f542a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44aed212-836a-4e2f-8b2a-57d636f542a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
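
The ENOENT on the .pid.haproxy file above is the expected first-run path: the agent probes for an existing proxy pidfile before spawning a new one. The helper behaves roughly like this sketch (simplified from neutron.agent.linux.utils.get_value_from_file, whose real version logs the error as seen above):

    def get_value_from_file(filename, converter=None):
        # Return the (optionally converted) file contents, or None when the
        # file is missing or unparsable -- e.g. no haproxy pidfile yet.
        try:
            with open(filename) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except (OSError, ValueError):
            return None
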
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.971 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[54880e0e-020f-4da8-8f1c-4264ba7607be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.972 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-44aed212-836a-4e2f-8b2a-57d636f542a7
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/44aed212-836a-4e2f-8b2a-57d636f542a7.pid.haproxy
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID 44aed212-836a-4e2f-8b2a-57d636f542a7
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:25:59 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:25:59.972 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7', 'env', 'PROCESS_TAG=haproxy-44aed212-836a-4e2f-8b2a-57d636f542a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44aed212-836a-4e2f-8b2a-57d636f542a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
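
With the config rendered (bind on 169.254.169.254:80, proxy to the Unix socket at /var/lib/neutron/metadata_proxy, and an X-OVN-Network-ID header added so the metadata service can resolve the requesting network), the agent spawns haproxy inside the ovnmeta- namespace through rootwrap. The exact command from the log line above, reproduced as an illustrative subprocess call; the agent itself goes through neutron.agent.linux.utils.create_process rather than a bare subprocess:

    import subprocess

    net_id = "44aed212-836a-4e2f-8b2a-57d636f542a7"
    cmd = [
        "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
        "ip", "netns", "exec", "ovnmeta-%s" % net_id,
        "env", "PROCESS_TAG=haproxy-%s" % net_id,
        "haproxy", "-f",
        "/var/lib/neutron/ovn-metadata-proxy/%s.conf" % net_id,
    ]
    subprocess.run(cmd, check=True)
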
Oct 12 17:25:59 np0005481680 nova_compute[264665]: 2025-10-12 21:25:59.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:00 np0005481680 nova_compute[264665]: 2025-10-12 21:26:00.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:00.453 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
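
The "Matched UPDATE" line is ovsdbapp's event framework at work: the agent registers a RowEvent against the SB_Global table and reacts when nb_cfg is bumped (here from 5 to 6), which is what triggers the "Delaying updating chassis table for 5 seconds" line shortly after. A skeletal version of such an event, assuming ovsdbapp's RowEvent base class and mirroring the constructor arguments printed in the match above (handler wiring omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fire on updates to the (single-row) SB_Global table."""

        def __init__(self):
            # Matches the logged event: events=('update',),
            # table='SB_Global', no column conditions.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            # e.g. schedule the delayed chassis-table refresh seen below.
            print("SB_Global nb_cfg: %s -> %s" % (old.nb_cfg, row.nb_cfg))
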
Oct 12 17:26:00 np0005481680 podman[274136]: 2025-10-12 21:26:00.454583942 +0000 UTC m=+0.097225720 container create 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:00 np0005481680 podman[274136]: 2025-10-12 21:26:00.397991117 +0000 UTC m=+0.040632965 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:26:00 np0005481680 systemd[1]: Started libpod-conmon-1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e.scope.
Oct 12 17:26:00 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:00 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f7ad4205a56b9dac73249be7dccd80518d4156267adfe587829e1ea448ae98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:00 np0005481680 podman[274136]: 2025-10-12 21:26:00.553802902 +0000 UTC m=+0.196444700 container init 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 12 17:26:00 np0005481680 podman[274136]: 2025-10-12 21:26:00.564346602 +0000 UTC m=+0.206988380 container start 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:00 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [NOTICE]   (274156) : New worker (274158) forked
Oct 12 17:26:00 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [NOTICE]   (274156) : Loading success.
Oct 12 17:26:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:00.638 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:26:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:00 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:00Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:8d:7d 10.100.0.22
Oct 12 17:26:00 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:00Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:8d:7d 10.100.0.22
Oct 12 17:26:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct 12 17:26:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.376 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-4957103a-6a21-4535-9c0e-541b9fd3326d" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.377 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-4957103a-6a21-4535-9c0e-541b9fd3326d" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.396 2 DEBUG nova.objects.instance [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'flavor' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:26:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:01 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.420 2 DEBUG nova.virt.libvirt.vif [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.421 2 DEBUG nova.network.os_vif_util [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.422 2 DEBUG nova.network.os_vif_util [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
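
The detach path first converts Nova's legacy VIF dict into an os-vif object (the VIFOpenVSwitch repr above); the actual unplugging is then delegated to the os-vif OVS plugin. A condensed sketch with the os_vif library, using field values from the converted object; the port_profile and the full Network contents are omitted here for brevity:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    v = vif.VIFOpenVSwitch(
        id="4957103a-6a21-4535-9c0e-541b9fd3326d",
        address="fa:16:3e:30:8d:7d",
        vif_name="tap4957103a-6a",
        bridge_name="br-int",
        network=network.Network(id="44aed212-836a-4e2f-8b2a-57d636f542a7"),
    )
    info = instance_info.InstanceInfo(
        uuid="33651582-07e4-4ebc-8cd7-74903789e983",
        name="tempest-TestNetworkBasicOps-server-268251449",
    )
    os_vif.unplug(v, info)
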
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.428 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.433 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.436 2 DEBUG nova.virt.libvirt.driver [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Attempting to detach device tap4957103a-6a from instance 33651582-07e4-4ebc-8cd7-74903789e983 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.437 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] detach device xml: <interface type="ethernet">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <mac address="fa:16:3e:30:8d:7d"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <model type="virtio"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <mtu size="1442"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <target dev="tap4957103a-6a"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </interface>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
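
Guest.detach_device() above hands the logged <interface> XML to libvirt; for a running instance the driver detaches from the persistent config and the live domain in separate steps (the _detach_from_persistent line marks the first). A minimal sketch of the underlying libvirt-python call, reusing the interface XML from the log:

    import libvirt

    iface_xml = """
    <interface type="ethernet">
      <mac address="fa:16:3e:30:8d:7d"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap4957103a-6a"/>
    </interface>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000003")
    # Persistent definition first (VIR_DOMAIN_AFFECT_CONFIG); the live domain
    # would get VIR_DOMAIN_AFFECT_LIVE in a second call.
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
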
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.446 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.450 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> not found in domain: <domain type='kvm' id='2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <name>instance-00000003</name>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <uuid>33651582-07e4-4ebc-8cd7-74903789e983</uuid>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:25:59</nova:creationTime>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:port uuid="4957103a-6a21-4535-9c0e-541b9fd3326d">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <memory unit='KiB'>131072</memory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <vcpu placement='static'>1</vcpu>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <resource>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <partition>/machine</partition>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </resource>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <sysinfo type='smbios'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='manufacturer'>RDO</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='product'>OpenStack Compute</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='serial'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='uuid'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='family'>Virtual Machine</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <boot dev='hd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <smbios mode='sysinfo'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <vmcoreinfo state='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <cpu mode='custom' match='exact' check='full'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <vendor>AMD</vendor>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='x2apic'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc-deadline'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='hypervisor'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc_adjust'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='spec-ctrl'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='stibp'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='arch-capabilities'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='cmp_legacy'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='overflow-recov'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='succor'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='ibrs'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='amd-ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='virt-ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='lbrv'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='tsc-scale'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='vmcb-clean'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='flushbyasid'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pause-filter'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pfthreshold'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='rdctl-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='mds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='gds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='rfds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='xsaves'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svm'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='topoext'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='npt'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='nrip-save'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <clock offset='utc'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='pit' tickpolicy='delay'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='hpet' present='no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_poweroff>destroy</on_poweroff>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_reboot>restart</on_reboot>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_crash>destroy</on_crash>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <disk type='network' device='disk'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk' index='2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='vda' bus='virtio'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='virtio-disk0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <disk type='network' device='cdrom'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk.config' index='1'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='sda' bus='sata'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <readonly/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='sata0-0-0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='0' model='pcie-root'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pcie.0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='1' port='0x10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='2' port='0x11'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='3' port='0x12'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='4' port='0x13'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='5' port='0x14'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='6' port='0x15'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='7' port='0x16'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='8' port='0x17'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.8'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='9' port='0x18'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.9'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='10' port='0x19'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='11' port='0x1a'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.11'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='12' port='0x1b'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.12'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='13' port='0x1c'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.13'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='14' port='0x1d'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.14'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='15' port='0x1e'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.15'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='16' port='0x1f'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.16'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='17' port='0x20'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.17'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='18' port='0x21'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.18'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='19' port='0x22'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.19'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='20' port='0x23'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.20'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='21' port='0x24'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.21'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='22' port='0x25'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.22'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='23' port='0x26'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.23'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='24' port='0x27'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.24'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='25' port='0x28'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.25'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-pci-bridge'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.26'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='usb'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='sata' index='0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='ide'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <interface type='ethernet'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mac address='fa:16:3e:90:b8:84'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='tap0c5e7571-52'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model type='virtio'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='vhost' rx_queue_size='512'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mtu size='1442'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='net0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <interface type='ethernet'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mac address='fa:16:3e:30:8d:7d'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='tap4957103a-6a'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model type='virtio'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='vhost' rx_queue_size='512'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mtu size='1442'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='net1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <serial type='pty'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target type='isa-serial' port='0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <model name='isa-serial'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </target>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <console type='pty' tty='/dev/pts/0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target type='serial' port='0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </console>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='tablet' bus='usb'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='usb' bus='0' port='1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='mouse' bus='ps2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='keyboard' bus='ps2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <listen type='address' address='::0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <audio id='1' type='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model type='virtio' heads='1' primary='yes'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='video0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <watchdog model='itco' action='reset'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='watchdog0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </watchdog>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <memballoon model='virtio'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <stats period='10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='balloon0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <rng model='virtio'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <backend model='random'>/dev/urandom</backend>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='rng0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <label>system_u:system_r:svirt_t:s0:c290,c929</label>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c929</imagelabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <label>+107:+107</label>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <imagelabel>+107:+107</imagelabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.452 2 INFO nova.virt.libvirt.driver [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully detached device tap4957103a-6a from instance 33651582-07e4-4ebc-8cd7-74903789e983 from the persistent domain config.
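[Note] The INFO line above shows the interface being removed from the persistent domain definition first; the live domain is handled separately in the lines that follow. A minimal sketch, assuming the instance UUID and tap name from this log, of how that persistent-config state can be checked with libvirt-python (illustrative only, not Nova's code):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('33651582-07e4-4ebc-8cd7-74903789e983')

    # VIR_DOMAIN_XML_INACTIVE returns the persistent definition rather than
    # the live state, so a device detached from the config should be absent.
    persistent = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
    taps = [t.get('dev') for t in
            ET.fromstring(persistent).findall('./devices/interface/target')]
    assert 'tap4957103a-6a' not in taps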
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.452 2 DEBUG nova.virt.libvirt.driver [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] (1/8): Attempting to detach device tap4957103a-6a with device alias net1 from instance 33651582-07e4-4ebc-8cd7-74903789e983 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.453 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] detach device xml: <interface type="ethernet">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <mac address="fa:16:3e:30:8d:7d"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <model type="virtio"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <mtu size="1442"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <target dev="tap4957103a-6a"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </interface>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
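[Note] The detach XML logged above is handed to libvirt at guest.py:465; the "(1/8)" marker earlier shows the driver retries the live detach up to eight times. A hedged sketch of the underlying libvirt call, reusing the interface fragment from this log (illustrative, not Nova's helper):

    import libvirt

    IFACE_XML = '''<interface type="ethernet">
      <mac address="fa:16:3e:30:8d:7d"/>
      <target dev="tap4957103a-6a"/>
    </interface>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')

    # VIR_DOMAIN_AFFECT_LIVE targets only the running guest; the persistent
    # definition was already updated in the step logged above.
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)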
Oct 12 17:26:01 np0005481680 kernel: tap4957103a-6a (unregistering): left promiscuous mode
Oct 12 17:26:01 np0005481680 NetworkManager[44859]: <info>  [1760304361.5683] device (tap4957103a-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.579 2 DEBUG nova.virt.libvirt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Received event <DeviceRemovedEvent: 1760304361.5774782, 33651582-07e4-4ebc-8cd7-74903789e983 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
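[Note] A live detach is asynchronous: the request returns once queued, and completion is signalled by the device-removed event the driver dispatches above. A sketch of waiting for that event with libvirt-python's event API, assuming alias net1 as in this log:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()  # must precede opening the connection
    conn = libvirt.open('qemu:///system')
    removed = threading.Event()

    def on_device_removed(conn, dom, dev_alias, opaque):
        # libvirt reports the device alias, e.g. 'net1' as seen above
        if dev_alias == 'net1':
            removed.set()

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed, None)

    def event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=event_loop, daemon=True).start()
    # ... issue detachDeviceFlags(...) here, then:
    removed.wait(timeout=20)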
Oct 12 17:26:01 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:01Z|00050|binding|INFO|Releasing lport 4957103a-6a21-4535-9c0e-541b9fd3326d from this chassis (sb_readonly=0)
Oct 12 17:26:01 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:01Z|00051|binding|INFO|Setting lport 4957103a-6a21-4535-9c0e-541b9fd3326d down in Southbound
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:01 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:01Z|00052|binding|INFO|Removing iface tap4957103a-6a ovn-installed in OVS
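[Note] Once the tap device disappears from OVS, ovn-controller releases the logical port and clears its chassis binding in the Southbound database, as the three binding|INFO lines show. One hedged way to observe the same transition from the chassis; the ovn-sbctl invocation here is illustrative, using the logical port UUID from this log:

    import json
    import subprocess

    LPORT = '4957103a-6a21-4535-9c0e-541b9fd3326d'
    out = subprocess.run(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=' + LPORT],
        capture_output=True, text=True, check=True)
    # The 'chassis' column empties and 'up' flips to false once released.
    print(json.loads(out.stdout))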
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.621 2 DEBUG nova.virt.libvirt.driver [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Start waiting for the detach event from libvirt for device tap4957103a-6a with device alias net1 for instance 33651582-07e4-4ebc-8cd7-74903789e983 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.622 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
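[Note] get_interface_by_cfg compares the requested interface config against the current domain XML; when the device is already gone it falls through to the "not found in domain" branch seen further below. A simplified stand-in for that lookup, matching on MAC and tap device name only:

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, tap):
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            m, t = iface.find('mac'), iface.find('target')
            if (m is not None and m.get('address') == mac and
                    t is not None and t.get('dev') == tap):
                return iface
        return None  # maps to the "not found in domain" log below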
Oct 12 17:26:01 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:01.626 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:8d:7d 10.100.0.22'], port_security=['fa:16:3e:30:8d:7d 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '33651582-07e4-4ebc-8cd7-74903789e983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44aed212-836a-4e2f-8b2a-57d636f542a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45c1af83-66cf-4f12-b9f3-589fae4453b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edb4ea0-1835-4a0d-84c0-d448b049b26b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=4957103a-6a21-4535-9c0e-541b9fd3326d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.627 2 DEBUG nova.network.neutron [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updated VIF entry in instance network info cache for port 4957103a-6a21-4535-9c0e-541b9fd3326d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.628 2 DEBUG nova.network.neutron [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:26:01 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:01.628 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 4957103a-6a21-4535-9c0e-541b9fd3326d in datapath 44aed212-836a-4e2f-8b2a-57d636f542a7 unbound from our chassis
Oct 12 17:26:01 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:01.630 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44aed212-836a-4e2f-8b2a-57d636f542a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
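[Note] With the last VIF unbound from datapath 44aed212-836a-4e2f-8b2a-57d636f542a7, the metadata agent tears down its per-network namespace (ovnmeta-<network-uuid>), as logged further below. A hedged sketch of the equivalent manual cleanup; the real agent drives this through pyroute2/privsep rather than shelling out:

    import subprocess

    NETNS = 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7'
    namespaces = subprocess.run(['ip', 'netns', 'list'],
                                capture_output=True, text=True).stdout
    if NETNS in namespaces:
        subprocess.run(['ip', 'netns', 'delete', NETNS], check=True)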
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.631 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <name>instance-00000003</name>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <uuid>33651582-07e4-4ebc-8cd7-74903789e983</uuid>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:25:59</nova:creationTime>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:port uuid="4957103a-6a21-4535-9c0e-541b9fd3326d">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <memory unit='KiB'>131072</memory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <vcpu placement='static'>1</vcpu>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <resource>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <partition>/machine</partition>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </resource>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <sysinfo type='smbios'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='manufacturer'>RDO</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='product'>OpenStack Compute</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='serial'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='uuid'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <entry name='family'>Virtual Machine</entry>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <boot dev='hd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <smbios mode='sysinfo'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <vmcoreinfo state='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <cpu mode='custom' match='exact' check='full'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <vendor>AMD</vendor>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='x2apic'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc-deadline'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='hypervisor'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc_adjust'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='spec-ctrl'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='stibp'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='arch-capabilities'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='cmp_legacy'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='overflow-recov'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='succor'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='ibrs'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='amd-ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='virt-ssbd'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='lbrv'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='tsc-scale'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='vmcb-clean'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='flushbyasid'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pause-filter'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pfthreshold'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='rdctl-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='mds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='gds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='rfds-no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='xsaves'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svm'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='require' name='topoext'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='npt'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <feature policy='disable' name='nrip-save'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <clock offset='utc'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='pit' tickpolicy='delay'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <timer name='hpet' present='no'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_poweroff>destroy</on_poweroff>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_reboot>restart</on_reboot>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <on_crash>destroy</on_crash>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <disk type='network' device='disk'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk' index='2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='vda' bus='virtio'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='virtio-disk0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <disk type='network' device='cdrom'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk.config' index='1'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='sda' bus='sata'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <readonly/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='sata0-0-0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='0' model='pcie-root'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pcie.0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='1' port='0x10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='2' port='0x11'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='3' port='0x12'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='4' port='0x13'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='5' port='0x14'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='6' port='0x15'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='7' port='0x16'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='8' port='0x17'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.8'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='9' port='0x18'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.9'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='10' port='0x19'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='11' port='0x1a'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.11'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='12' port='0x1b'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.12'/>
Oct 12 17:26:01 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:01.632 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bd2aee-1c13-47af-ba51-022280e029e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='13' port='0x1c'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.13'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='14' port='0x1d'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.14'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='15' port='0x1e'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.15'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='16' port='0x1f'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.16'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='17' port='0x20'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.17'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='18' port='0x21'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.18'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='19' port='0x22'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.19'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='20' port='0x23'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.20'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='21' port='0x24'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.21'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='22' port='0x25'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.22'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='23' port='0x26'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.23'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 12 17:26:01 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:01.632 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7 namespace which is not needed anymore
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='24' port='0x27'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.24'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target chassis='25' port='0x28'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.25'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model name='pcie-pci-bridge'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='pci.26'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='usb'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <controller type='sata' index='0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='ide'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <interface type='ethernet'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mac address='fa:16:3e:90:b8:84'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target dev='tap0c5e7571-52'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model type='virtio'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <driver name='vhost' rx_queue_size='512'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <mtu size='1442'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='net0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <serial type='pty'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target type='isa-serial' port='0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:        <model name='isa-serial'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      </target>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <console type='pty' tty='/dev/pts/0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <target type='serial' port='0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </console>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='tablet' bus='usb'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='usb' bus='0' port='1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='mouse' bus='ps2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input1'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <input type='keyboard' bus='ps2'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='input2'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <listen type='address' address='::0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <audio id='1' type='none'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <model type='virtio' heads='1' primary='yes'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='video0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <watchdog model='itco' action='reset'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='watchdog0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </watchdog>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <memballoon model='virtio'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <stats period='10'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='balloon0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <rng model='virtio'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <backend model='random'>/dev/urandom</backend>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <alias name='rng0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <label>system_u:system_r:svirt_t:s0:c290,c929</label>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c929</imagelabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <label>+107:+107</label>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <imagelabel>+107:+107</imagelabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.631 2 INFO nova.virt.libvirt.driver [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully detached device tap4957103a-6a from instance 33651582-07e4-4ebc-8cd7-74903789e983 from the live domain config.#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.632 2 DEBUG nova.virt.libvirt.vif [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.632 2 DEBUG nova.network.os_vif_util [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.633 2 DEBUG nova.network.os_vif_util [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.634 2 DEBUG os_vif [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4957103a-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.651 2 DEBUG oslo_concurrency.lockutils [req-34d7bdff-5ce4-4a92-853e-2182c044a600 req-84b9937c-8fab-4a79-8934-815bff0dcc97 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.653 2 INFO os_vif [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a')#033[00m
Oct 12 17:26:01 np0005481680 nova_compute[264665]: 2025-10-12 21:26:01.654 2 DEBUG nova.virt.libvirt.guest [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:26:01</nova:creationTime>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:01 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:01 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:01 np0005481680 nova_compute[264665]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 12 17:26:01 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [NOTICE]   (274156) : haproxy version is 2.8.14-c23fe91
Oct 12 17:26:01 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [NOTICE]   (274156) : path to executable is /usr/sbin/haproxy
Oct 12 17:26:01 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [WARNING]  (274156) : Exiting Master process...
Oct 12 17:26:01 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [ALERT]    (274156) : Current worker (274158) exited with code 143 (Terminated)
Oct 12 17:26:01 np0005481680 neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7[274152]: [WARNING]  (274156) : All workers exited. Exiting... (0)
Oct 12 17:26:01 np0005481680 systemd[1]: libpod-1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e.scope: Deactivated successfully.
Oct 12 17:26:01 np0005481680 podman[274191]: 2025-10-12 21:26:01.837749154 +0000 UTC m=+0.071717184 container died 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 12 17:26:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-55f7ad4205a56b9dac73249be7dccd80518d4156267adfe587829e1ea448ae98-merged.mount: Deactivated successfully.
Oct 12 17:26:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e-userdata-shm.mount: Deactivated successfully.
Oct 12 17:26:01 np0005481680 podman[274191]: 2025-10-12 21:26:01.898907485 +0000 UTC m=+0.132875475 container cleanup 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:01 np0005481680 systemd[1]: libpod-conmon-1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e.scope: Deactivated successfully.
Oct 12 17:26:01 np0005481680 podman[274220]: 2025-10-12 21:26:01.996912434 +0000 UTC m=+0.063534214 container remove 1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.000 2 DEBUG nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.000 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.001 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.002 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.002 2 DEBUG nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.003 2 WARNING nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d for instance with vm_state active and task_state None.#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.003 2 DEBUG nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-unplugged-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.004 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.004 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.005 2 DEBUG oslo_concurrency.lockutils [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.005 2 DEBUG nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-unplugged-4957103a-6a21-4535-9c0e-541b9fd3326d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.006 2 WARNING nova.compute.manager [req-12079085-5d06-4267-9614-74e2883f49cf req-ac3f12e0-7aed-49ae-8e08-58736a4c8111 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-unplugged-4957103a-6a21-4535-9c0e-541b9fd3326d for instance with vm_state active and task_state None.#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.008 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cfd035-8eb6-43d3-b757-50be7e64ce64]: (4, ('Sun Oct 12 09:26:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7 (1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e)\n1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e\nSun Oct 12 09:26:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7 (1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e)\n1b41d1f8105fd712620fb60124d1f3ddf1d170b0f1ccb548887bb8b9951d3d2e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.010 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f4825a3c-9a01-49b6-8cb1-820aee521e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:02] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:26:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:02] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.013 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44aed212-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:02 np0005481680 kernel: tap44aed212-80: left promiscuous mode
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.022 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8658ceb4-7785-46a3-9bc0-06f86fdf8817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.077 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[4f21f50a-91f5-413f-b3b0-6987ede3f8a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.078 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[93bb5724-77bd-4289-b096-ef2b51bba845]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.103 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[96dc5dbc-c5f0-448e-a8f0-5ff672ff3fb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402630, 'reachable_time': 34599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274235, 'error': None, 'target': 'ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 systemd[1]: run-netns-ovnmeta\x2d44aed212\x2d836a\x2d4e2f\x2d8b2a\x2d57d636f542a7.mount: Deactivated successfully.
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.110 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44aed212-836a-4e2f-8b2a-57d636f542a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:26:02 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:02.110 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[14dddca0-dbd0-4852-a326-aeaa9fc11346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:02 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa840042f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.761 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.762 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:26:02 np0005481680 nova_compute[264665]: 2025-10-12 21:26:02.762 2 DEBUG nova.network.neutron [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:26:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 2.0 KiB/s wr, 0 op/s
Oct 12 17:26:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:26:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:26:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:03.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:03 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a2d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:03 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:03Z|00053|binding|INFO|Releasing lport 44e43b9b-4616-4f52-be04-796d4bf640d4 from this chassis (sb_readonly=0)
Oct 12 17:26:03 np0005481680 nova_compute[264665]: 2025-10-12 21:26:03.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.084 2 DEBUG nova.compute.manager [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.085 2 DEBUG oslo_concurrency.lockutils [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.085 2 DEBUG oslo_concurrency.lockutils [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.086 2 DEBUG oslo_concurrency.lockutils [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.086 2 DEBUG nova.compute.manager [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.087 2 WARNING nova.compute.manager [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-plugged-4957103a-6a21-4535-9c0e-541b9fd3326d for instance with vm_state active and task_state None.#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.087 2 DEBUG nova.compute.manager [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-deleted-4957103a-6a21-4535-9c0e-541b9fd3326d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.087 2 INFO nova.compute.manager [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Neutron deleted interface 4957103a-6a21-4535-9c0e-541b9fd3326d; detaching it from the instance and deleting it from the info cache#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.088 2 DEBUG nova.network.neutron [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.124 2 DEBUG nova.objects.instance [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lazy-loading 'system_metadata' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.167 2 DEBUG nova.objects.instance [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lazy-loading 'flavor' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.213 2 DEBUG nova.virt.libvirt.vif [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.214 2 DEBUG nova.network.os_vif_util [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.215 2 DEBUG nova.network.os_vif_util [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.219 2 DEBUG nova.virt.libvirt.guest [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.225 2 DEBUG nova.virt.libvirt.guest [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> not found in domain: <domain type='kvm' id='2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <name>instance-00000003</name>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <uuid>33651582-07e4-4ebc-8cd7-74903789e983</uuid>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:26:01</nova:creationTime>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <memory unit='KiB'>131072</memory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <vcpu placement='static'>1</vcpu>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <resource>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <partition>/machine</partition>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </resource>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <sysinfo type='smbios'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='manufacturer'>RDO</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='product'>OpenStack Compute</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='serial'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='uuid'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='family'>Virtual Machine</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <boot dev='hd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <smbios mode='sysinfo'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <vmcoreinfo state='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <cpu mode='custom' match='exact' check='full'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <vendor>AMD</vendor>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='x2apic'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc-deadline'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='hypervisor'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc_adjust'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='spec-ctrl'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='stibp'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='arch-capabilities'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='cmp_legacy'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='overflow-recov'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='succor'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='ibrs'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='amd-ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='virt-ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='lbrv'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='tsc-scale'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='vmcb-clean'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='flushbyasid'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pause-filter'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pfthreshold'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='rdctl-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='mds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='gds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='rfds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='xsaves'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svm'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='topoext'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='npt'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='nrip-save'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <clock offset='utc'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='pit' tickpolicy='delay'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='hpet' present='no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_poweroff>destroy</on_poweroff>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_reboot>restart</on_reboot>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_crash>destroy</on_crash>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <disk type='network' device='disk'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk' index='2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='vda' bus='virtio'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='virtio-disk0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <disk type='network' device='cdrom'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk.config' index='1'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='sda' bus='sata'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <readonly/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='sata0-0-0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='0' model='pcie-root'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pcie.0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='1' port='0x10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='2' port='0x11'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='3' port='0x12'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='4' port='0x13'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='5' port='0x14'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='6' port='0x15'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='7' port='0x16'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='8' port='0x17'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.8'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='9' port='0x18'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.9'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='10' port='0x19'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='11' port='0x1a'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.11'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='12' port='0x1b'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.12'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='13' port='0x1c'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.13'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='14' port='0x1d'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.14'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='15' port='0x1e'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.15'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='16' port='0x1f'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.16'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='17' port='0x20'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.17'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='18' port='0x21'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.18'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='19' port='0x22'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.19'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='20' port='0x23'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.20'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='21' port='0x24'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.21'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='22' port='0x25'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.22'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='23' port='0x26'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.23'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='24' port='0x27'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.24'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='25' port='0x28'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.25'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-pci-bridge'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.26'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='usb'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='sata' index='0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='ide'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <interface type='ethernet'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <mac address='fa:16:3e:90:b8:84'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='tap0c5e7571-52'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model type='virtio'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='vhost' rx_queue_size='512'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <mtu size='1442'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='net0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <serial type='pty'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target type='isa-serial' port='0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <model name='isa-serial'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </target>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <console type='pty' tty='/dev/pts/0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target type='serial' port='0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </console>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='tablet' bus='usb'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='usb' bus='0' port='1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='mouse' bus='ps2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='keyboard' bus='ps2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <listen type='address' address='::0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <audio id='1' type='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model type='virtio' heads='1' primary='yes'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='video0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <watchdog model='itco' action='reset'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='watchdog0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </watchdog>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <memballoon model='virtio'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <stats period='10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='balloon0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <rng model='virtio'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <backend model='random'>/dev/urandom</backend>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='rng0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <label>system_u:system_r:svirt_t:s0:c290,c929</label>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c929</imagelabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <label>+107:+107</label>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <imagelabel>+107:+107</imagelabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.225 2 DEBUG nova.virt.libvirt.guest [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.231 2 DEBUG nova.virt.libvirt.guest [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:30:8d:7d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4957103a-6a"/></interface> not found in domain: <domain type='kvm' id='2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <name>instance-00000003</name>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <uuid>33651582-07e4-4ebc-8cd7-74903789e983</uuid>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:26:01</nova:creationTime>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <memory unit='KiB'>131072</memory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <vcpu placement='static'>1</vcpu>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <resource>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <partition>/machine</partition>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </resource>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <sysinfo type='smbios'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='manufacturer'>RDO</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='product'>OpenStack Compute</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='serial'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='uuid'>33651582-07e4-4ebc-8cd7-74903789e983</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <entry name='family'>Virtual Machine</entry>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <boot dev='hd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <smbios mode='sysinfo'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <vmcoreinfo state='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <cpu mode='custom' match='exact' check='full'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <vendor>AMD</vendor>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='x2apic'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc-deadline'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='hypervisor'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='tsc_adjust'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='spec-ctrl'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='stibp'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='arch-capabilities'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='cmp_legacy'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='overflow-recov'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='succor'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='ibrs'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='amd-ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='virt-ssbd'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='lbrv'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='tsc-scale'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='vmcb-clean'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='flushbyasid'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pause-filter'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='pfthreshold'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='rdctl-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='mds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='pschange-mc-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='gds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='rfds-no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='xsaves'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='svm'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='require' name='topoext'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='npt'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <feature policy='disable' name='nrip-save'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <clock offset='utc'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='pit' tickpolicy='delay'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <timer name='hpet' present='no'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_poweroff>destroy</on_poweroff>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_reboot>restart</on_reboot>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <on_crash>destroy</on_crash>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <disk type='network' device='disk'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk' index='2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='vda' bus='virtio'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='virtio-disk0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <disk type='network' device='cdrom'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='qemu' type='raw' cache='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <auth username='openstack'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <secret type='ceph' uuid='5adb8c35-1b74-5730-a252-62321f654cd5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source protocol='rbd' name='vms/33651582-07e4-4ebc-8cd7-74903789e983_disk.config' index='1'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.100' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.102' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <host name='192.168.122.101' port='6789'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='sda' bus='sata'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <readonly/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='sata0-0-0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='0' model='pcie-root'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pcie.0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='1' port='0x10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='2' port='0x11'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='3' port='0x12'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='4' port='0x13'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='5' port='0x14'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='6' port='0x15'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='7' port='0x16'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='8' port='0x17'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.8'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='9' port='0x18'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.9'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='10' port='0x19'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='11' port='0x1a'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.11'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='12' port='0x1b'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.12'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='13' port='0x1c'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.13'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='14' port='0x1d'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.14'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='15' port='0x1e'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.15'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='16' port='0x1f'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.16'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='17' port='0x20'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.17'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='18' port='0x21'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.18'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='19' port='0x22'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.19'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='20' port='0x23'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.20'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='21' port='0x24'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.21'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='22' port='0x25'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.22'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='23' port='0x26'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.23'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='24' port='0x27'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.24'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-root-port'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target chassis='25' port='0x28'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.25'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model name='pcie-pci-bridge'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='pci.26'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='usb'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <controller type='sata' index='0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='ide'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </controller>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <interface type='ethernet'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <mac address='fa:16:3e:90:b8:84'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target dev='tap0c5e7571-52'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model type='virtio'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <driver name='vhost' rx_queue_size='512'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <mtu size='1442'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='net0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <serial type='pty'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target type='isa-serial' port='0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:        <model name='isa-serial'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      </target>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <console type='pty' tty='/dev/pts/0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <source path='/dev/pts/0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <log file='/var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983/console.log' append='off'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <target type='serial' port='0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='serial0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </console>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='tablet' bus='usb'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='usb' bus='0' port='1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='mouse' bus='ps2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input1'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <input type='keyboard' bus='ps2'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='input2'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </input>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <listen type='address' address='::0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </graphics>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <audio id='1' type='none'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <model type='virtio' heads='1' primary='yes'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='video0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <watchdog model='itco' action='reset'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='watchdog0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </watchdog>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <memballoon model='virtio'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <stats period='10'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='balloon0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <rng model='virtio'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <backend model='random'>/dev/urandom</backend>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <alias name='rng0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <label>system_u:system_r:svirt_t:s0:c290,c929</label>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c929</imagelabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <label>+107:+107</label>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <imagelabel>+107:+107</imagelabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </seclabel>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
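The domain XML dumped above gives every guest device a stable PCI address (pcie-root-port controllers on bus 0x00, the virtio NIC on bus 0x02, the balloon on 0x04, the rng on 0x05). A minimal offline audit of such a dump can be done with the standard library; domain.xml is a hypothetical local copy of the <domain> document above:

# sketch: list device tags and their PCI addresses from a saved domain XML
import xml.etree.ElementTree as ET

tree = ET.parse("domain.xml")  # assumed: the <domain> dump saved to a file
for dev in tree.find("devices"):
    addr = dev.find("address")
    if addr is not None and addr.get("type") == "pci":
        print(dev.tag, addr.get("bus"), addr.get("slot"), addr.get("function"))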
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.231 2 WARNING nova.virt.libvirt.driver [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Detaching interface fa:16:3e:30:8d:7d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4957103a-6a' not found.#033[00m
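The WARNING above is the benign detach race: the guest's tap device was already gone, so Nova maps libvirt's "device missing" error to DeviceNotFound and carries on. A minimal sketch of the same tolerant pattern with the libvirt Python bindings (dom and iface_xml are assumed inputs, not Nova's code):

import libvirt

def detach_if_present(dom, iface_xml):
    # Treat an already-removed device as success, mirroring the
    # "no longer found on the guest" warning above.
    try:
        dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    except libvirt.libvirtError as e:
        if e.get_error_code() != libvirt.VIR_ERR_DEVICE_MISSING:
            raise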
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.232 2 DEBUG nova.virt.libvirt.vif [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.233 2 DEBUG nova.network.os_vif_util [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Converting VIF {"id": "4957103a-6a21-4535-9c0e-541b9fd3326d", "address": "fa:16:3e:30:8d:7d", "network": {"id": "44aed212-836a-4e2f-8b2a-57d636f542a7", "bridge": "br-int", "label": "tempest-network-smoke--400142137", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4957103a-6a", "ovs_interfaceid": "4957103a-6a21-4535-9c0e-541b9fd3326d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.234 2 DEBUG nova.network.os_vif_util [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.234 2 DEBUG os_vif [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
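os_vif's public surface for this step is small: initialize once, then pass a VIF object and an InstanceInfo to unplug(), which dispatches to the 'ovs' plugin named in the log. A minimal sketch with values abbreviated from the entries above (only a subset of fields is populated here):

import os_vif
from os_vif.objects import instance_info, vif as vif_obj

os_vif.initialize()
inst = instance_info.InstanceInfo(
    uuid="33651582-07e4-4ebc-8cd7-74903789e983",
    name="tempest-TestNetworkBasicOps-server-268251449")
vif = vif_obj.VIFOpenVSwitch(
    id="4957103a-6a21-4535-9c0e-541b9fd3326d",
    address="fa:16:3e:30:8d:7d",
    vif_name="tap4957103a-6a",
    bridge_name="br-int")
os_vif.unplug(vif, inst)  # logs "Unplugging vif ..." as seen above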
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4957103a-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
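"Transaction caused no change" is the expected outcome of an idempotent delete: DelPortCommand ran with if_exists=True and the tap port was already gone from br-int. A sketch of the same call via ovsdbapp, assuming a local ovsdb-server socket (the connection string is an assumption):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
# if_exists=True makes a repeat delete a no-op instead of an error
api.del_port("tap4957103a-6a", bridge="br-int", if_exists=True).execute(
    check_error=True)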
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.240 2 INFO os_vif [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:8d:7d,bridge_name='br-int',has_traffic_filtering=True,id=4957103a-6a21-4535-9c0e-541b9fd3326d,network=Network(44aed212-836a-4e2f-8b2a-57d636f542a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4957103a-6a')#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.241 2 DEBUG nova.virt.libvirt.guest [req-62437796-9ec5-4c04-99e6-5627830bb9f8 req-fbfe411f-2575-4d35-9411-540aa90b8b0f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:name>tempest-TestNetworkBasicOps-server-268251449</nova:name>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:creationTime>2025-10-12 21:26:04</nova:creationTime>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:flavor name="m1.nano">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:memory>128</nova:memory>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:disk>1</nova:disk>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:swap>0</nova:swap>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:vcpus>1</nova:vcpus>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:flavor>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:owner>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  <nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    <nova:port uuid="0c5e7571-52d2-44ba-9b10-914d5d4b6dcb">
Oct 12 17:26:04 np0005481680 nova_compute[264665]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:    </nova:port>
Oct 12 17:26:04 np0005481680 nova_compute[264665]:  </nova:ports>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: </nova:instance>
Oct 12 17:26:04 np0005481680 nova_compute[264665]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
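The <nova:instance> block above is written into the domain's metadata under the Nova namespace URI, so it persists with the domain definition and can be read back through the same libvirt call Nova uses. A minimal read-back sketch, assuming a system connection (instance-00000003 is the libvirt name for this instance, per the machine scope later in the log):

import libvirt

NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000003")
# returns the <nova:instance> document shown above
print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS))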
Oct 12 17:26:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.642 2 INFO nova.network.neutron [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Port 4957103a-6a21-4535-9c0e-541b9fd3326d from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.642 2 DEBUG nova.network.neutron [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
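The cache refresh above reconciles Nova's network_info with what Neutron still has bound to the instance; after the detach only port 0c5e7571-52 remains. A read-only sketch of the same lookup with openstacksdk (the credential source is an assumption):

import openstack

conn = openstack.connect(cloud="envvars")  # assumed clouds.yaml/env credentials
for port in conn.network.ports(
        device_id="33651582-07e4-4ebc-8cd7-74903789e983"):
    print(port.id, port.mac_address, port.status)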
Oct 12 17:26:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:04.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
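The anonymous "HEAD / HTTP/1.0" 200 entries are periodic load-balancer health probes against radosgw (note the sub-millisecond latencies). The same check can be reproduced in two lines; the endpoint URL here is an assumption, since the log only shows the probing clients' addresses:

import requests

r = requests.head("http://compute-0.ctlplane.example.com:8080/", timeout=2)
print(r.status_code)  # 200 while RGW is serving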
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.667 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:26:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:04 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.692 2 DEBUG oslo_concurrency.lockutils [None req-4c081bef-ecb8-4ec9-a9ad-ef04f38a14b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "interface-33651582-07e4-4ebc-8cd7-74903789e983-4957103a-6a21-4535-9c0e-541b9fd3326d" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.824 2 DEBUG nova.compute.manager [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.824 2 DEBUG nova.compute.manager [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing instance network info cache due to event network-changed-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.825 2 DEBUG oslo_concurrency.lockutils [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.825 2 DEBUG oslo_concurrency.lockutils [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.825 2 DEBUG nova.network.neutron [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Refreshing network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.901 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.901 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.902 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.902 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.905 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
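The acquire/release pairs above are oslo.concurrency named locks: terminate_instance serializes on the instance UUID, and clear_events_for_instance briefly takes the "-events" lock nested inside it. A minimal sketch of the same pattern (the decorated function is hypothetical, not Nova's code):

from oslo_concurrency import lockutils

@lockutils.synchronized("33651582-07e4-4ebc-8cd7-74903789e983")
def do_terminate_instance():
    # runs serialized with anything else holding the same lock name
    ...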
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.906 2 INFO nova.compute.manager [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Terminating instance#033[00m
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.908 2 DEBUG nova.compute.manager [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:26:04 np0005481680 kernel: tap0c5e7571-52 (unregistering): left promiscuous mode
Oct 12 17:26:04 np0005481680 NetworkManager[44859]: <info>  [1760304364.9803] device (tap0c5e7571-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:04 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:04Z|00054|binding|INFO|Releasing lport 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb from this chassis (sb_readonly=0)
Oct 12 17:26:04 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:04Z|00055|binding|INFO|Setting lport 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb down in Southbound
Oct 12 17:26:04 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:04Z|00056|binding|INFO|Removing iface tap0c5e7571-52 ovn-installed in OVS
Oct 12 17:26:04 np0005481680 nova_compute[264665]: 2025-10-12 21:26:04.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.003 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:b8:84 10.100.0.11'], port_security=['fa:16:3e:90:b8:84 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '33651582-07e4-4ebc-8cd7-74903789e983', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94f6889e-47b5-40e5-a758-6153d625c1cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccfa101b-afca-486c-8c0f-cd96615ea67e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd74f6a0-f3bd-4453-9a4f-5d8ee236e898, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.005 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb in datapath 94f6889e-47b5-40e5-a758-6153d625c1cd unbound from our chassis#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.007 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94f6889e-47b5-40e5-a758-6153d625c1cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.008 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f4b998-ba81-4715-86ee-5e56cbfe001e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.009 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd namespace which is not needed anymore#033[00m
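Before tearing down the ovnmeta- namespace, the agent reacted to the port binding going down in the southbound DB; that state can be inspected the same way from the chassis. A sketch shelling out to ovn-sbctl (assumed to be on PATH with the SB database reachable):

import subprocess

out = subprocess.run(
    ["ovn-sbctl", "--bare", "--columns=up,chassis", "find", "Port_Binding",
     "logical_port=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb"],
    capture_output=True, text=True, check=True).stdout
print(out)  # up=false and an empty chassis once the lport is released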
Oct 12 17:26:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 12 17:26:05 np0005481680 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.968s CPU time.
Oct 12 17:26:05 np0005481680 systemd-machined[218338]: Machine qemu-2-instance-00000003 terminated.
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.160 2 INFO nova.virt.libvirt.driver [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Instance destroyed successfully.#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.161 2 DEBUG nova.objects.instance [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid 33651582-07e4-4ebc-8cd7-74903789e983 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.179 2 DEBUG nova.virt.libvirt.vif [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-268251449',display_name='tempest-TestNetworkBasicOps-server-268251449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-268251449',id=3,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsFcE5AMKtIFPZMjKJPN+iEPh7gRY29aaAJ0j6GSbcBqr8lUPzAtSKSCyX62X7YORTNjurdqBG1QPJAxo1kLNoWphhgW8JYvlxxD73Bp1+E6Va7DjWR8NUzIRibYRV06w==',key_name='tempest-TestNetworkBasicOps-1769524992',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:25:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-krgvymv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:25:30Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=33651582-07e4-4ebc-8cd7-74903789e983,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.179 2 DEBUG nova.network.os_vif_util [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.180 2 DEBUG nova.network.os_vif_util [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.181 2 DEBUG os_vif [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c5e7571-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [NOTICE]   (273870) : haproxy version is 2.8.14-c23fe91
Oct 12 17:26:05 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [NOTICE]   (273870) : path to executable is /usr/sbin/haproxy
Oct 12 17:26:05 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [WARNING]  (273870) : Exiting Master process...
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.236 2 INFO os_vif [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:b8:84,bridge_name='br-int',has_traffic_filtering=True,id=0c5e7571-52d2-44ba-9b10-914d5d4b6dcb,network=Network(94f6889e-47b5-40e5-a758-6153d625c1cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5e7571-52')#033[00m
Oct 12 17:26:05 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [ALERT]    (273870) : Current worker (273872) exited with code 143 (Terminated)
Oct 12 17:26:05 np0005481680 neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd[273866]: [WARNING]  (273870) : All workers exited. Exiting... (0)
Oct 12 17:26:05 np0005481680 systemd[1]: libpod-0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0.scope: Deactivated successfully.
Oct 12 17:26:05 np0005481680 podman[274296]: 2025-10-12 21:26:05.242932507 +0000 UTC m=+0.073944501 container died 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:26:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0-userdata-shm.mount: Deactivated successfully.
Oct 12 17:26:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cebf4d276202073613beb8037d4f3f69ddbca5c62d4c7a0310ce0ab4fdf28f72-merged.mount: Deactivated successfully.
Oct 12 17:26:05 np0005481680 podman[274296]: 2025-10-12 21:26:05.301186923 +0000 UTC m=+0.132198927 container cleanup 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 12 17:26:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:05.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:05 np0005481680 systemd[1]: libpod-conmon-0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0.scope: Deactivated successfully.
Oct 12 17:26:05 np0005481680 podman[274347]: 2025-10-12 21:26:05.403282097 +0000 UTC m=+0.068519962 container remove 0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 12 17:26:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:05 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.414 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f4856b06-6036-4ab5-86af-78f22dc0d701]: (4, ('Sun Oct 12 09:26:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd (0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0)\n0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0\nSun Oct 12 09:26:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd (0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0)\n0c16fb6c6d7be217efa9b8c0019b3dc1ce5eed52d54b06e969390b00189d4bd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.418 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[cff70579-a1d4-408c-972a-2fe72a1bcdaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.420 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94f6889e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 kernel: tap94f6889e-40: left promiscuous mode
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.457 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8074399c-fffa-42af-9ca2-fbcbef0c53c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.499 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8081d7ad-aed9-475c-bfe4-cecb5cd8c18d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.501 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5b3590-a2f9-4738-ad23-2154e920962d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.529 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[498c000e-ea40-4559-beb8-cce547b24efa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399603, 'reachable_time': 38438, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274367, 'error': None, 'target': 'ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:26:05 np0005481680 systemd[1]: run-netns-ovnmeta\x2d94f6889e\x2d47b5\x2d40e5\x2da758\x2d6153d625c1cd.mount: Deactivated successfully.
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.532 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.532 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0d41dd-8ee7-422c-8e86-f0f2e531839a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
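remove_netns above runs under privsep and ultimately unlinks the named network namespace; outside Neutron the equivalent is a one-liner with pyroute2 (namespace name taken from this log, root privileges assumed):

from pyroute2 import netns

NS = "ovnmeta-94f6889e-47b5-40e5-a758-6153d625c1cd"
if NS in netns.listnetns():  # idempotent cleanup, like the agent's teardown
    netns.remove(NS)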
Oct 12 17:26:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:05.642 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.826 2 INFO nova.virt.libvirt.driver [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Deleting instance files /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983_del#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.828 2 INFO nova.virt.libvirt.driver [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Deletion of /var/lib/nova/instances/33651582-07e4-4ebc-8cd7-74903789e983_del complete#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.893 2 INFO nova.compute.manager [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.894 2 DEBUG oslo.service.loopingcall [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.895 2 DEBUG nova.compute.manager [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:26:05 np0005481680 nova_compute[264665]: 2025-10-12 21:26:05.895 2 DEBUG nova.network.neutron [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.174 2 DEBUG nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-unplugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.174 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.177 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.178 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.178 2 DEBUG nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-unplugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.179 2 DEBUG nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-unplugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.179 2 DEBUG nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.179 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "33651582-07e4-4ebc-8cd7-74903789e983-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.180 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.180 2 DEBUG oslo_concurrency.lockutils [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.180 2 DEBUG nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] No waiting events found dispatching network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.181 2 WARNING nova.compute.manager [req-585474fd-cdf6-47fa-a3a5-00228978b6a9 req-c8d9d92f-3baa-4e1e-85ec-a09f1d34668c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received unexpected event network-vif-plugged-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb for instance with vm_state active and task_state deleting.
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.183 2 DEBUG nova.network.neutron [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updated VIF entry in instance network info cache for port 0c5e7571-52d2-44ba-9b10-914d5d4b6dcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.183 2 DEBUG nova.network.neutron [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [{"id": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "address": "fa:16:3e:90:b8:84", "network": {"id": "94f6889e-47b5-40e5-a758-6153d625c1cd", "bridge": "br-int", "label": "tempest-network-smoke--632017084", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5e7571-52", "ovs_interfaceid": "0c5e7571-52d2-44ba-9b10-914d5d4b6dcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.203 2 DEBUG oslo_concurrency.lockutils [req-c4334e4f-ff6c-47fd-a596-6ca50fe5fd5e req-5cf0f77c-4480-4ab4-aa8d-d1e7aa5669f6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-33651582-07e4-4ebc-8cd7-74903789e983" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 12 17:26:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.648 2 DEBUG nova.network.neutron [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:26:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:06.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.667 2 INFO nova.compute.manager [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Took 0.77 seconds to deallocate network for instance.
Oct 12 17:26:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:06 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.707 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.708 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.760 2 DEBUG oslo_concurrency.processutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:26:06 np0005481680 nova_compute[264665]: 2025-10-12 21:26:06.962 2 DEBUG nova.compute.manager [req-bfae154e-2080-4bcf-89b0-7a7e5b84c205 req-57520b57-254d-43c5-a343-480adb5d86e1 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Received event network-vif-deleted-0c5e7571-52d2-44ba-9b10-914d5d4b6dcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:26:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 12 17:26:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:07.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:26:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:26:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728621413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.256 2 DEBUG oslo_concurrency.processutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.265 2 DEBUG nova.compute.provider_tree [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.284 2 DEBUG nova.scheduler.client.report [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.307 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.332 2 INFO nova.scheduler.client.report [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance 33651582-07e4-4ebc-8cd7-74903789e983
Oct 12 17:26:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:07 np0005481680 nova_compute[264665]: 2025-10-12 21:26:07.396 2 DEBUG oslo_concurrency.lockutils [None req-53ff413e-2852-46e3-829d-6efdf87c3c33 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "33651582-07e4-4ebc-8cd7-74903789e983" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:07 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:08.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:08 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a3a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct 12 17:26:09 np0005481680 podman[274394]: 2025-10-12 21:26:09.118340662 +0000 UTC m=+0.081465725 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.328003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369328051, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2152, "num_deletes": 251, "total_data_size": 4095587, "memory_usage": 4147664, "flush_reason": "Manual Compaction"}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 12 17:26:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:09.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369359347, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3991592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24631, "largest_seqno": 26782, "table_properties": {"data_size": 3982057, "index_size": 5965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19934, "raw_average_key_size": 20, "raw_value_size": 3962861, "raw_average_value_size": 4043, "num_data_blocks": 262, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304164, "oldest_key_time": 1760304164, "file_creation_time": 1760304369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 31413 microseconds, and 13361 cpu microseconds.
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.359413) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3991592 bytes OK
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.359443) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.363566) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.363588) EVENT_LOG_v1 {"time_micros": 1760304369363580, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.363612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4086884, prev total WAL file size 4086884, number of live WAL files 2.
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.365368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3898KB)], [56(11MB)]
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369365444, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16248502, "oldest_snapshot_seqno": -1}
Oct 12 17:26:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:09 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5775 keys, 14092704 bytes, temperature: kUnknown
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369475239, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14092704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14053693, "index_size": 23477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146853, "raw_average_key_size": 25, "raw_value_size": 13948962, "raw_average_value_size": 2415, "num_data_blocks": 960, "num_entries": 5775, "num_filter_entries": 5775, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.475719) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14092704 bytes
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.477790) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 128.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6295, records dropped: 520 output_compression: NoCompression
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.477825) EVENT_LOG_v1 {"time_micros": 1760304369477809, "job": 30, "event": "compaction_finished", "compaction_time_micros": 109933, "compaction_time_cpu_micros": 55512, "output_level": 6, "num_output_files": 1, "total_output_size": 14092704, "num_input_records": 6295, "num_output_records": 5775, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369479374, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304369483466, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.365227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.483511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.483517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.483521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.483524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:26:09.483528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:26:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212609 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:26:10 np0005481680 nova_compute[264665]: 2025-10-12 21:26:10.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:10 np0005481680 nova_compute[264665]: 2025-10-12 21:26:10.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:10.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:10 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.8 KiB/s wr, 29 op/s
Oct 12 17:26:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:11.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:11 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:11 np0005481680 podman[274661]: 2025-10-12 21:26:11.927440447 +0000 UTC m=+0.066195283 container create c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:26:11 np0005481680 systemd[1]: Started libpod-conmon-c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913.scope.
Oct 12 17:26:11 np0005481680 podman[274661]: 2025-10-12 21:26:11.900471243 +0000 UTC m=+0.039226129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:12 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:12] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:26:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:12] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:26:12 np0005481680 podman[274661]: 2025-10-12 21:26:12.02719641 +0000 UTC m=+0.165951286 container init c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:12 np0005481680 podman[274661]: 2025-10-12 21:26:12.042600166 +0000 UTC m=+0.181355002 container start c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 12 17:26:12 np0005481680 podman[274661]: 2025-10-12 21:26:12.046560118 +0000 UTC m=+0.185315004 container attach c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:12 np0005481680 sweet_chebyshev[274677]: 167 167
Oct 12 17:26:12 np0005481680 systemd[1]: libpod-c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913.scope: Deactivated successfully.
Oct 12 17:26:12 np0005481680 podman[274661]: 2025-10-12 21:26:12.052766637 +0000 UTC m=+0.191521483 container died c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:26:12 np0005481680 systemd[1]: var-lib-containers-storage-overlay-82e22ff0895dc29d8df504098b986eec2298c51d7dccfb63b5045f35cce7d27b-merged.mount: Deactivated successfully.
Oct 12 17:26:12 np0005481680 podman[274661]: 2025-10-12 21:26:12.11511392 +0000 UTC m=+0.253868766 container remove c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_chebyshev, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:12 np0005481680 nova_compute[264665]: 2025-10-12 21:26:12.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:12 np0005481680 systemd[1]: libpod-conmon-c56bce37275346e38d01051b8d866493b3c5cae4fa83c6f42e1b06951eb74913.scope: Deactivated successfully.
Oct 12 17:26:12 np0005481680 nova_compute[264665]: 2025-10-12 21:26:12.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:12 np0005481680 podman[274704]: 2025-10-12 21:26:12.365663488 +0000 UTC m=+0.065069153 container create c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:26:12 np0005481680 systemd[1]: Started libpod-conmon-c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1.scope.
Oct 12 17:26:12 np0005481680 podman[274704]: 2025-10-12 21:26:12.342344199 +0000 UTC m=+0.041749914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:12 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632dc6fdfa337d8c53226651067f7560c9a7c513c59abd39bd8b73368ffa128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632dc6fdfa337d8c53226651067f7560c9a7c513c59abd39bd8b73368ffa128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632dc6fdfa337d8c53226651067f7560c9a7c513c59abd39bd8b73368ffa128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:12 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632dc6fdfa337d8c53226651067f7560c9a7c513c59abd39bd8b73368ffa128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:12 np0005481680 podman[274704]: 2025-10-12 21:26:12.472929064 +0000 UTC m=+0.172334779 container init c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:26:12 np0005481680 podman[274704]: 2025-10-12 21:26:12.485172649 +0000 UTC m=+0.184578324 container start c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:26:12 np0005481680 podman[274704]: 2025-10-12 21:26:12.488434872 +0000 UTC m=+0.187840547 container attach c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:26:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:12.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:12 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:13.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]: [
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:    {
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "available": false,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "being_replaced": false,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "ceph_device_lvm": false,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "lsm_data": {},
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "lvs": [],
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "path": "/dev/sr0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "rejected_reasons": [
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "Has a FileSystem",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "Insufficient space (<5GB)"
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        ],
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        "sys_api": {
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "actuators": null,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "device_nodes": [
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:                "sr0"
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            ],
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "devname": "sr0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "human_readable_size": "482.00 KB",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "id_bus": "ata",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "model": "QEMU DVD-ROM",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "nr_requests": "2",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "parent": "/dev/sr0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "partitions": {},
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "path": "/dev/sr0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "removable": "1",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "rev": "2.5+",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "ro": "0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "rotational": "0",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "sas_address": "",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "sas_device_handle": "",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "scheduler_mode": "mq-deadline",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "sectors": 0,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "sectorsize": "2048",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "size": 493568.0,
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "support_discard": "2048",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "type": "disk",
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:            "vendor": "QEMU"
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:        }
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]:    }
Oct 12 17:26:13 np0005481680 sweet_satoshi[274721]: ]
Oct 12 17:26:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:13 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:13 np0005481680 systemd[1]: libpod-c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1.scope: Deactivated successfully.
Oct 12 17:26:13 np0005481680 podman[274704]: 2025-10-12 21:26:13.420397101 +0000 UTC m=+1.119802806 container died c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d632dc6fdfa337d8c53226651067f7560c9a7c513c59abd39bd8b73368ffa128-merged.mount: Deactivated successfully.
Oct 12 17:26:13 np0005481680 podman[274704]: 2025-10-12 21:26:13.487488325 +0000 UTC m=+1.186894020 container remove c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:26:13 np0005481680 systemd[1]: libpod-conmon-c8b09671aac53ee87e37651d2fa689b0a8e71c8ba2cb787bb20721497927d0c1.scope: Deactivated successfully.
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:26:13 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
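[Editor's note] The audit-channel lines above embed the dispatched command as a JSON array after `cmd=` (the lines that stop at `entity=` are truncated in the journal and are skipped here). A minimal sketch, assuming only the format visible in these samples, for pulling the command back out:

```python
# Sketch only: extract the dispatched command from ceph-mon audit lines
# like the `auth get` / `config generate-minimal-conf` entries above.
# Lines truncated after entity= carry no cmd= and return None.
import json
import re

CMD_RE = re.compile(r"cmd=(\[.*\]): dispatch")

def parse_audit_cmd(line: str):
    """Return the decoded command list from an audit log line, or None."""
    m = CMD_RE.search(line)
    return json.loads(m.group(1)) if m else None

line = ("log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' "
        "entity='mgr.compute-0.fmjeht' cmd=[{\"prefix\": \"auth get\", "
        "\"entity\": \"client.admin\"}]: dispatch")
print(parse_audit_cmd(line))
# [{'prefix': 'auth get', 'entity': 'client.admin'}]
```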
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.324414302 +0000 UTC m=+0.065129955 container create 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:26:14 np0005481680 systemd[1]: Started libpod-conmon-49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184.scope.
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.29788073 +0000 UTC m=+0.038596433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.423107557 +0000 UTC m=+0.163823270 container init 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.4333187 +0000 UTC m=+0.174034353 container start 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.438719449 +0000 UTC m=+0.179435142 container attach 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:26:14 np0005481680 amazing_feistel[276057]: 167 167
Oct 12 17:26:14 np0005481680 systemd[1]: libpod-49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184.scope: Deactivated successfully.
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.441434848 +0000 UTC m=+0.182150491 container died 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:26:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-89c11046fec5e692be7d730a1829122c7b42e0a4df2e3fe60b57d1c4f0b5efe0-merged.mount: Deactivated successfully.
Oct 12 17:26:14 np0005481680 podman[276039]: 2025-10-12 21:26:14.500114856 +0000 UTC m=+0.240830509 container remove 49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:26:14 np0005481680 systemd[1]: libpod-conmon-49b6aa34f816752cce722c53a2c6551fca2e4a81b401bc789f4d662469b14184.scope: Deactivated successfully.
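[Editor's note] The block above is one complete short-lived cephadm helper container: podman logs create, init, start, attach, died and remove within about 200 ms (the `m=+...` values are offsets from process start, which is why the `image pull` line carries an earlier offset than the `create` line it follows). A hedged sketch of watching the same lifecycle live; the container-name filter is taken from the log and would differ per run:

```python
# Illustrative only: stream the podman event sequence seen above
# (create -> init -> start -> attach -> died -> remove).
# `podman events --format json` emits one JSON object per line.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "container=amazing_feistel"],  # name from the log above
    stdout=subprocess.PIPE, text=True)

for raw in proc.stdout:
    ev = json.loads(raw)
    # Print the same fields the journal lines show: event type, name, time.
    print(ev.get("Status"), ev.get("Name"), ev.get("Time"))
```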
Oct 12 17:26:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaa4001470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
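[Editor's note] Each radosgw request produces the three-line pattern above: a `starting new request` marker, a `req done` summary, and a beast access line. The HEAD / probes from 192.168.122.100/.102 recur every second or so and look like haproxy-style health checks. A rough parser for the beast line; the field layout (pointer, client, user, [timestamp], "request", status, bytes, latency) is inferred from these samples, not from radosgw documentation:

```python
# Parse the beast access lines as they appear in this journal.
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s')

sample = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
          '[12/Oct/2025:21:26:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.001000025s')
m = BEAST_RE.search(sample)
print(m.group("client"), m.group("status"), m.group("latency"))
# 192.168.122.100 200 0.001000025
```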
Oct 12 17:26:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:14 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:14 np0005481680 podman[276080]: 2025-10-12 21:26:14.761813461 +0000 UTC m=+0.067396672 container create 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:14 np0005481680 systemd[1]: Started libpod-conmon-2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2.scope.
Oct 12 17:26:14 np0005481680 podman[276080]: 2025-10-12 21:26:14.734545041 +0000 UTC m=+0.040128292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:14 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
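[Editor's note] The repeated kernel messages above are informational: each bind mount into the container sits on XFS with 32-bit inode timestamps, and `0x7fffffff` is the largest signed 32-bit `time_t`. Decoding that limit:

```python
# "supports timestamps until 2038 (0x7fffffff)" is the signed 32-bit
# time_t ceiling; it decodes to the classic Y2038 boundary.
from datetime import datetime, timezone

limit = 0x7fffffff  # 2147483647 seconds since the Unix epoch
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```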
Oct 12 17:26:14 np0005481680 podman[276080]: 2025-10-12 21:26:14.882679467 +0000 UTC m=+0.188262718 container init 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:26:14 np0005481680 podman[276080]: 2025-10-12 21:26:14.896191854 +0000 UTC m=+0.201775055 container start 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:26:14 np0005481680 podman[276080]: 2025-10-12 21:26:14.900998628 +0000 UTC m=+0.206581839 container attach 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:26:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Oct 12 17:26:15 np0005481680 nova_compute[264665]: 2025-10-12 21:26:15.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:15 np0005481680 ecstatic_margulis[276096]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:26:15 np0005481680 ecstatic_margulis[276096]: --> All data devices are unavailable
Oct 12 17:26:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:15.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:15 np0005481680 systemd[1]: libpod-2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2.scope: Deactivated successfully.
Oct 12 17:26:15 np0005481680 podman[276080]: 2025-10-12 21:26:15.380736376 +0000 UTC m=+0.686319577 container died 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:26:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c7fc5b8fb713226acc6ac38b45ecbce7d57b465c54f20b5d8c0064616cfc6078-merged.mount: Deactivated successfully.
Oct 12 17:26:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:15 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:15 np0005481680 podman[276080]: 2025-10-12 21:26:15.441389414 +0000 UTC m=+0.746972625 container remove 2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:26:15 np0005481680 nova_compute[264665]: 2025-10-12 21:26:15.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:15 np0005481680 systemd[1]: libpod-conmon-2194558fc9c4a61e621a80b7bcf28dd75f95ffb826b78beab03c22d67b1404b2.scope: Deactivated successfully.
Oct 12 17:26:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.231439706 +0000 UTC m=+0.072006271 container create ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:26:16 np0005481680 systemd[1]: Started libpod-conmon-ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4.scope.
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.201533657 +0000 UTC m=+0.042100282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.333116749 +0000 UTC m=+0.173683344 container init ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.347909279 +0000 UTC m=+0.188475854 container start ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.352662431 +0000 UTC m=+0.193229086 container attach ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:16 np0005481680 pedantic_shtern[276232]: 167 167
Oct 12 17:26:16 np0005481680 systemd[1]: libpod-ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4.scope: Deactivated successfully.
Oct 12 17:26:16 np0005481680 conmon[276232]: conmon ad8cd2a0c9f29d3492c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4.scope/container/memory.events
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.358980884 +0000 UTC m=+0.199547459 container died ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay-25caeb81ca0c8b4069e13050fc33ab0371b051b72fef53b4c92b400268ce3193-merged.mount: Deactivated successfully.
Oct 12 17:26:16 np0005481680 podman[276216]: 2025-10-12 21:26:16.41957488 +0000 UTC m=+0.260141445 container remove ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_shtern, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 17:26:16 np0005481680 systemd[1]: libpod-conmon-ad8cd2a0c9f29d3492c9d728cdc9955b1dd026d9d69b770db0511ea05a6ff7e4.scope: Deactivated successfully.
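[Editor's note] The bare `167 167` printed by these one-shot containers (amazing_feistel, pedantic_shtern, interesting_perlman) is consistent with cephadm probing ownership of a Ceph data path: 167 is the reserved uid/gid of the `ceph` user on RHEL-family images. This reading is an assumption; a hypothetical equivalent of what such a probe prints:

```python
# Hypothetical equivalent of the "167 167" output above: stat a Ceph data
# directory and print its owner uid and gid. The path is an assumption.
import os

st = os.stat("/var/lib/ceph")
print(st.st_uid, st.st_gid)  # e.g. "167 167" on a RHEL-family ceph host
```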
Oct 12 17:26:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:16.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:16 np0005481680 podman[276257]: 2025-10-12 21:26:16.683104492 +0000 UTC m=+0.074332581 container create ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:26:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:16 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:16 np0005481680 systemd[1]: Started libpod-conmon-ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7.scope.
Oct 12 17:26:16 np0005481680 podman[276257]: 2025-10-12 21:26:16.65501259 +0000 UTC m=+0.046240729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0792858a326e93afa72113073ca0cd76e7d1b666d08a7942c59e92446fa7647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0792858a326e93afa72113073ca0cd76e7d1b666d08a7942c59e92446fa7647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0792858a326e93afa72113073ca0cd76e7d1b666d08a7942c59e92446fa7647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0792858a326e93afa72113073ca0cd76e7d1b666d08a7942c59e92446fa7647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:16 np0005481680 podman[276257]: 2025-10-12 21:26:16.808759801 +0000 UTC m=+0.199987930 container init ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:26:16 np0005481680 podman[276257]: 2025-10-12 21:26:16.819783185 +0000 UTC m=+0.211011284 container start ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:26:16 np0005481680 podman[276257]: 2025-10-12 21:26:16.824541957 +0000 UTC m=+0.215770056 container attach ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:26:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]: {
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:    "0": [
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:        {
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "devices": [
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "/dev/loop3"
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            ],
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "lv_name": "ceph_lv0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "lv_size": "21470642176",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "name": "ceph_lv0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "tags": {
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.cluster_name": "ceph",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.crush_device_class": "",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.encrypted": "0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.osd_id": "0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.type": "block",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.vdo": "0",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:                "ceph.with_tpm": "0"
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            },
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "type": "block",
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:            "vg_name": "ceph_vg0"
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:        }
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]:    ]
Oct 12 17:26:17 np0005481680 inspiring_shaw[276273]: }
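[Editor's note] The JSON emitted by inspiring_shaw above maps an OSD id ("0") to its logical volumes, each carrying `ceph.*` LV tags (cluster fsid, osd fsid, type=block). The shape matches `ceph-volume lvm list --format json` output, though that command name is an inference, not something the log states. A minimal sketch for summarizing data of this shape, using a trimmed copy of the sample:

```python
# Summarize OSD-id -> logical-volume JSON shaped like the block above.
import json

raw = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
               "ceph.osd_id": "0", "ceph.type": "block"}
    }
  ]
}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']} "
              f"(devices: {', '.join(lv['devices'])})")
# osd.0: block on /dev/ceph_vg0/ceph_lv0 (devices: /dev/loop3)
```

This also explains the earlier ecstatic_margulis output ("passed data devices: 0 physical, 1 LVM / All data devices are unavailable"): the only candidate LV is already consumed by osd.0, so no new OSDs can be created.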
Oct 12 17:26:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:17.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:26:17 np0005481680 systemd[1]: libpod-ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7.scope: Deactivated successfully.
Oct 12 17:26:17 np0005481680 podman[276257]: 2025-10-12 21:26:17.203740781 +0000 UTC m=+0.594968880 container died ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:26:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d0792858a326e93afa72113073ca0cd76e7d1b666d08a7942c59e92446fa7647-merged.mount: Deactivated successfully.
Oct 12 17:26:17 np0005481680 podman[276257]: 2025-10-12 21:26:17.265224591 +0000 UTC m=+0.656452680 container remove ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:17 np0005481680 systemd[1]: libpod-conmon-ae45a282c6e498e1307e3257d7bf46514edaeda75178de8cd6afc6bb058888a7.scope: Deactivated successfully.
Oct 12 17:26:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:17.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:17 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.037855945 +0000 UTC m=+0.064177200 container create a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:26:18 np0005481680 systemd[1]: Started libpod-conmon-a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a.scope.
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.01118872 +0000 UTC m=+0.037510035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.141035387 +0000 UTC m=+0.167356682 container init a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.151227159 +0000 UTC m=+0.177548424 container start a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.155268713 +0000 UTC m=+0.181590028 container attach a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:26:18 np0005481680 interesting_perlman[276405]: 167 167
Oct 12 17:26:18 np0005481680 systemd[1]: libpod-a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a.scope: Deactivated successfully.
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.160402514 +0000 UTC m=+0.186723769 container died a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-119e2b830787c7ba197bd39701fe27322d7630d8e8d7d767c35f2c723b24cabe-merged.mount: Deactivated successfully.
Oct 12 17:26:18 np0005481680 podman[276388]: 2025-10-12 21:26:18.218828786 +0000 UTC m=+0.245150061 container remove a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:26:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:26:18 np0005481680 systemd[1]: libpod-conmon-a92f338b00bdf447fa7580ea42c8fed369d749ec74a08f6f065a02f0b7e1cd7a.scope: Deactivated successfully.
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:26:18
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:26:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:26:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:26:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:18.363 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:18.363 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:26:18.364 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:18 np0005481680 podman[276430]: 2025-10-12 21:26:18.484263487 +0000 UTC m=+0.071707075 container create 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:26:18 np0005481680 systemd[1]: Started libpod-conmon-4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7.scope.
Oct 12 17:26:18 np0005481680 podman[276430]: 2025-10-12 21:26:18.456592465 +0000 UTC m=+0.044036103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:26:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241320a6f9dd31bc7ac5ec5433e9ffb806509d2333b2c44a82a29e34b99f89bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241320a6f9dd31bc7ac5ec5433e9ffb806509d2333b2c44a82a29e34b99f89bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241320a6f9dd31bc7ac5ec5433e9ffb806509d2333b2c44a82a29e34b99f89bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:18 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241320a6f9dd31bc7ac5ec5433e9ffb806509d2333b2c44a82a29e34b99f89bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:18 np0005481680 podman[276430]: 2025-10-12 21:26:18.578520959 +0000 UTC m=+0.165964527 container init 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:26:18 np0005481680 podman[276430]: 2025-10-12 21:26:18.594834598 +0000 UTC m=+0.182278186 container start 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:26:18 np0005481680 podman[276430]: 2025-10-12 21:26:18.599324263 +0000 UTC m=+0.186767811 container attach 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:26:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa8c002fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:18.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:18 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa80001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
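Each pg_autoscaler "pg target" above is the pool's capacity ratio multiplied by its bias and a cluster-wide PG budget. From the numbers in these lines the budget works out to 300, which would correspond to 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference; the log does not state it). The raw target is then quantized to a value near the current pg_num, which is why most pools stay where they are. A sketch reproducing the logged targets:

```python
# Reproduce the pg_autoscaler numbers logged above. The budget of 300 is
# inferred from the log (ratio * bias * 300 matches every line) and would
# correspond to 3 OSDs * mon_target_pg_per_osd=100 -- an assumption.
PG_BUDGET = 300

def pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * PG_BUDGET

print(pg_target(7.185749983720779e-06, 1.0))  # ~0.00215572, the '.mgr' target
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, cephfs.cephfs.meta
```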
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:26:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:26:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:26:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:19.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:19 np0005481680 lvm[276523]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:26:19 np0005481680 lvm[276523]: VG ceph_vg0 finished
Oct 12 17:26:19 np0005481680 vigilant_carver[276447]: {}
Oct 12 17:26:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:19 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:19 np0005481680 podman[276430]: 2025-10-12 21:26:19.449760376 +0000 UTC m=+1.037203964 container died 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:26:19 np0005481680 systemd[1]: libpod-4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7.scope: Deactivated successfully.
Oct 12 17:26:19 np0005481680 systemd[1]: libpod-4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7.scope: Consumed 1.576s CPU time.
Oct 12 17:26:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-241320a6f9dd31bc7ac5ec5433e9ffb806509d2333b2c44a82a29e34b99f89bb-merged.mount: Deactivated successfully.
Oct 12 17:26:19 np0005481680 podman[276430]: 2025-10-12 21:26:19.509789869 +0000 UTC m=+1.097233457 container remove 4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_carver, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:26:19 np0005481680 systemd[1]: libpod-conmon-4727edc2e127f04d738984a8d01fdb55763913be5d2e05ba2f70c6344b9224c7.scope: Deactivated successfully.
Oct 12 17:26:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:26:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:26:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:20 np0005481680 nova_compute[264665]: 2025-10-12 21:26:20.156 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304365.1548803, 33651582-07e4-4ebc-8cd7-74903789e983 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:26:20 np0005481680 nova_compute[264665]: 2025-10-12 21:26:20.156 2 INFO nova.compute.manager [-] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] VM Stopped (Lifecycle Event)#033[00m
Oct 12 17:26:20 np0005481680 nova_compute[264665]: 2025-10-12 21:26:20.174 2 DEBUG nova.compute.manager [None req-36abbd60-5dbe-4bd8-8d90-00ef8b9bba27 - - - - - -] [instance: 33651582-07e4-4ebc-8cd7-74903789e983] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:26:20 np0005481680 nova_compute[264665]: 2025-10-12 21:26:20.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:26:20 np0005481680 nova_compute[264665]: 2025-10-12 21:26:20.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:20 np0005481680 kernel: ganesha.nfsd[272341]: segfault at 50 ip 00007fab65faa32e sp 00007fab19ffa210 error 4 in libntirpc.so.5.8[7fab65f8f000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 12 17:26:20 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:26:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[267711]: 12/10/2025 21:26:20 : epoch 68ec1bd9 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faabc00a460 fd 48 proxy ignored for local
Oct 12 17:26:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:20 np0005481680 systemd[1]: Started Process Core Dump (PID 276565/UID 0).
Oct 12 17:26:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 12 17:26:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:21.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:22] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:26:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:22] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:26:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:22.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:22 np0005481680 podman[276593]: 2025-10-12 21:26:22.690665818 +0000 UTC m=+0.098395009 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:26:22 np0005481680 podman[276594]: 2025-10-12 21:26:22.740366044 +0000 UTC m=+0.145083998 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
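The podman health_status entries carry each container's configuration in the config_data label as a Python-style dict literal, so it can be lifted back into a structure with ast.literal_eval. A sketch using a trimmed excerpt of the iscsid config_data above (the real label also includes the image and volume lists):

```python
import ast

# Trimmed from the config_data label on the iscsid health_status line above.
config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
               "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', "
               "'test': '/openstack/healthcheck'}, 'net': 'host', "
               "'privileged': True, 'restart': 'always'}")

cfg = ast.literal_eval(config_data)       # dict literal -> dict
print(cfg['healthcheck']['test'])         # -> /openstack/healthcheck
print(cfg['privileged'], cfg['restart'])  # -> True always
```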
Oct 12 17:26:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 12 17:26:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:23 np0005481680 systemd-coredump[276566]: Process 267715 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 67:
                                                       #0  0x00007fab65faa32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
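Two offsets are reported for this crash: the kernel line places the faulting ip inside the executable mapping libntirpc.so.5.8[7fab65f8f000+2c000], while systemd-coredump resolves the frame to libntirpc.so.5.8 + 0x2232e. The 0x7000 difference is consistent with the kernel quoting the start of the r-x segment and coredump offsetting from the ELF load base, which would put that segment at file offset 0x7000 (an inference, not stated in the log):

```python
ip         = 0x00007fab65faa32e   # faulting instruction (kernel segfault line)
exec_start = 0x00007fab65f8f000   # start of the r-x mapping (kernel segfault line)
elf_offset = 0x2232e              # frame offset reported by systemd-coredump

print(hex(ip - exec_start))                  # -> 0x1b32e, into the exec segment
print(hex(elf_offset - (ip - exec_start)))   # -> 0x7000, implied segment file offset
```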
Oct 12 17:26:23 np0005481680 systemd[1]: systemd-coredump@13-276565-0.service: Deactivated successfully.
Oct 12 17:26:23 np0005481680 systemd[1]: systemd-coredump@13-276565-0.service: Consumed 1.243s CPU time.
Oct 12 17:26:23 np0005481680 podman[276645]: 2025-10-12 21:26:23.770164417 +0000 UTC m=+0.043197391 container died 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Oct 12 17:26:23 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ed3fbc2af06840f5470a3dc9e793d13a9ffa7d148cc58e492aad471012057e51-merged.mount: Deactivated successfully.
Oct 12 17:26:23 np0005481680 podman[276645]: 2025-10-12 21:26:23.82477955 +0000 UTC m=+0.097812494 container remove 7015d5a6a75b26624c15d8868aec78b735e117cccd39dceff60a1737dbf7cc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:26:23 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:26:24 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:26:24 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.385s CPU time.
Oct 12 17:26:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 42 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 4.2 KiB/s wr, 10 op/s
Oct 12 17:26:25 np0005481680 nova_compute[264665]: 2025-10-12 21:26:25.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:25 np0005481680 nova_compute[264665]: 2025-10-12 21:26:25.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 42 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 4.2 KiB/s wr, 10 op/s
Oct 12 17:26:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:27.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:26:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:27.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:26:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:27.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
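The alertmanager entries are logfmt: space-separated key=value pairs with optionally quoted values. A few lines of Python can pull out the fields; shlex keeps the quoted msg= and err= values intact. A sketch against the first warning above:

```python
import shlex

line = ('ts=2025-10-12T21:26:27.191Z caller=notify.go:732 level=warn '
        'component=dispatcher receiver=ceph-dashboard integration=webhook[2] '
        'msg="Notify attempt failed, will retry later" attempts=1')

# shlex splits on whitespace while keeping quoted values as single tokens.
fields = dict(tok.split('=', 1) for tok in shlex.split(line))
print(fields['level'], fields['receiver'], fields['msg'])
# -> warn ceph-dashboard Notify attempt failed, will retry later
```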
Oct 12 17:26:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212628 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:26:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:28 np0005481680 podman[276692]: 2025-10-12 21:26:28.753272296 +0000 UTC m=+0.068492491 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:26:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 42 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 4.2 KiB/s wr, 10 op/s
Oct 12 17:26:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212629 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:26:30 np0005481680 nova_compute[264665]: 2025-10-12 21:26:30.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:30 np0005481680 nova_compute[264665]: 2025-10-12 21:26:30.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:30.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 12 17:26:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:31.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:32] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:26:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:32] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:26:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212632 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
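"Layer4 connection problem, info: Connection refused" means haproxy's health check failed at the TCP connect itself, which lines up with the ganesha backend having just crashed and restarted. A Layer4 check is essentially a timed connect, roughly as sketched below (the address is illustrative; the real backend endpoints live in the haproxy config, not in this log):

```python
import socket

def layer4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection can be established (haproxy 'L4OK')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, timed out, unreachable -> server marked DOWN
        return False

# Illustrative only: 2049 is the standard NFS port assumed for the backend.
print(layer4_check("127.0.0.1", 2049))
```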
Oct 12 17:26:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 12 17:26:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:26:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:26:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:34 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 14.
Oct 12 17:26:34 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:26:34 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.385s CPU time.
Oct 12 17:26:34 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:26:34 np0005481680 podman[276772]: 2025-10-12 21:26:34.671580438 +0000 UTC m=+0.084117712 container create c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 17:26:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:34 np0005481680 podman[276772]: 2025-10-12 21:26:34.63235707 +0000 UTC m=+0.044894404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:26:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae12985c54986ac32a576053f044bb69f037c15fde475f55d7cc68015ebb6f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae12985c54986ac32a576053f044bb69f037c15fde475f55d7cc68015ebb6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae12985c54986ac32a576053f044bb69f037c15fde475f55d7cc68015ebb6f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:26:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae12985c54986ac32a576053f044bb69f037c15fde475f55d7cc68015ebb6f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
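The "supports timestamps until 2038 (0x7fffffff)" warnings refer to XFS inodes that store seconds in a signed 32-bit counter; 0x7fffffff seconds after the Unix epoch lands on 2038-01-19:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit second counter (the "2038" in the log).
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```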
Oct 12 17:26:34 np0005481680 podman[276772]: 2025-10-12 21:26:34.750986359 +0000 UTC m=+0.163523713 container init c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Oct 12 17:26:34 np0005481680 podman[276772]: 2025-10-12 21:26:34.767098873 +0000 UTC m=+0.179636177 container start c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:26:34 np0005481680 bash[276772]: c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79
Oct 12 17:26:34 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:26:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:34 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
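With this start (restart counter 14 per the systemd line above), ganesha enters a 90-second grace window at the logged 21:26:34, so clients can reclaim state until roughly 21:28:04:

```python
from datetime import datetime, timedelta

grace_start = datetime.fromisoformat("2025-10-12T21:26:34+00:00")  # from the log
print(grace_start + timedelta(seconds=90))   # -> 2025-10-12 21:28:04+00:00
```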
Oct 12 17:26:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 12 17:26:35 np0005481680 nova_compute[264665]: 2025-10-12 21:26:35.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:35.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:35 np0005481680 nova_compute[264665]: 2025-10-12 21:26:35.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:26:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.452 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.452 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.453 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.453 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.683 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.684 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.684 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.684 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:26:36 np0005481680 nova_compute[264665]: 2025-10-12 21:26:36.685 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:26:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 12 17:26:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:37.192Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:26:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:37.193Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:26:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:37.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:26:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:26:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305728399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.241 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:26:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:37.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.506 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.508 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4640MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.509 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.510 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.610 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.611 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:26:37 np0005481680 nova_compute[264665]: 2025-10-12 21:26:37.637 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:26:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:26:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905468617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:26:38 np0005481680 nova_compute[264665]: 2025-10-12 21:26:38.139 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:26:38 np0005481680 nova_compute[264665]: 2025-10-12 21:26:38.146 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:26:38 np0005481680 nova_compute[264665]: 2025-10-12 21:26:38.166 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
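Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory dict logged above yields 7168 MB of RAM, 32 VCPUs and 52.2 GB of disk. Recomputing from the logged values:

```python
# Inventory copied from the nova.scheduler.client.report line above
# (min_unit/max_unit/step_size omitted; they do not affect capacity).
inventory = {
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    schedulable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, schedulable)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
```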
Oct 12 17:26:38 np0005481680 nova_compute[264665]: 2025-10-12 21:26:38.187 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:26:38 np0005481680 nova_compute[264665]: 2025-10-12 21:26:38.188 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:26:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.185 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.185 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.186 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.186 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.203 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.204 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.205 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:26:39 np0005481680 nova_compute[264665]: 2025-10-12 21:26:39.205 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
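The batch above is oslo.service's periodic-task loop walking the ComputeManager tasks; _reclaim_queued_deletes short-circuits because deferred delete is disabled. A stdlib stand-in for that guard pattern (names are illustrative, not Nova's implementation):

    # reclaim_instance_interval <= 0 means deferred delete is off, which
    # is Nova's default; the task logs the skip and returns immediately.
    reclaim_instance_interval = 0

    def reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ... real work would reap instances left in SOFT_DELETED state ...

    reclaim_queued_deletes()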
Oct 12 17:26:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:40 np0005481680 podman[276880]: 2025-10-12 21:26:40.111622069 +0000 UTC m=+0.075607574 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
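The podman event above inlines the container's entire Kolla config_data blob, but monitoring usually needs only three fields. A sketch pulling them out of the attribute list (the sample string is a truncated copy of the line above):

    import re

    line = ("container health_status 930df8e53033... "
            "(image=quay.io/..., name=ovn_metadata_agent, "
            "health_status=healthy, health_failing_streak=0, ...)")

    # key=value pairs inside the parenthesized attribute list
    name = re.search(r"name=([^,)]+)", line).group(1)
    status = re.search(r"health_status=([^,)]+)", line).group(1)
    streak = int(re.search(r"health_failing_streak=(\d+)", line).group(1))
    print(name, status, streak)  # ovn_metadata_agent healthy 0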
Oct 12 17:26:40 np0005481680 nova_compute[264665]: 2025-10-12 21:26:40.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:40 np0005481680 nova_compute[264665]: 2025-10-12 21:26:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:40 np0005481680 nova_compute[264665]: 2025-10-12 21:26:40.678 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:26:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:26:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:26:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:40 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:26:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:40 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:26:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 12 17:26:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
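The monitor's _set_new_cache_sizes line above reports its cache split in raw bytes; converting makes it legible (roughly 973 MiB total, 332 MiB each for incremental and full osdmaps, 304 MiB for the kv cache):

    # Convert the ceph-mon cache figures above from bytes to MiB.
    MiB = 1 << 20
    for label, nbytes in [("cache_size", 1020054731),
                          ("inc_alloc", 348127232),
                          ("full_alloc", 348127232),
                          ("kv_alloc", 318767104)]:
        print(f"{label}: {nbytes / MiB:.1f} MiB")
    # cache_size: 972.7 MiB, inc_alloc: 332.0 MiB,
    # full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB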
Oct 12 17:26:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:41.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:42] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Oct 12 17:26:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:42] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
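Each Prometheus scrape shows up twice above: once on the mgr container's stdout and once through the mgr's cherrypy access logger. The same exposition endpoint can be fetched by hand; the URL below is an assumption (the ceph-mgr prometheus module listens on port 9283 by default, and the address is taken from the client IP in the log):

    import urllib.request

    # Hypothetical endpoint; verify host/port against `ceph mgr services`.
    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode()
    print(body.splitlines()[0])  # first Prometheus exposition-format line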
Oct 12 17:26:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 12 17:26:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:43.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 12 17:26:45 np0005481680 nova_compute[264665]: 2025-10-12 21:26:45.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:45.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:45 np0005481680 nova_compute[264665]: 2025-10-12 21:26:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:26:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
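The ganesha startup block above follows a fixed layout: timestamp : epoch : host : daemon[thread] function :COMPONENT :LEVEL :message. Parsing it makes the CRIT/WARN entries (missing D-Bus socket, no export entries, unknown RADOS_URLS/RGW blocks) easy to filter out of the EVENT noise; the regex is inferred from the lines shown here, not an official format spec:

    import re

    GANESHA_RE = re.compile(
        r'^(?P<ts>\S+ \S+) : epoch (?P<epoch>\S+) : (?P<host>\S+) : '
        r'(?P<daemon>[^\[]+)\[(?P<thread>[^\]]+)\] (?P<func>\S+) '
        r':(?P<comp>[^:]+):(?P<level>[^:]+):(?P<msg>.*)$'
    )

    line = ('12/10/2025 21:26:46 : epoch 68ec1d0a : compute-0 : '
            'ganesha.nfsd-2[main] main :NFS STARTUP :WARN '
            ':No export entries found in configuration file !!!')
    m = GANESHA_RE.match(line)
    if m and m.group("level").strip() in ("CRIT", "WARN"):
        print(m.group("comp").strip(), m.group("func"),
              m.group("msg").strip())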
Oct 12 17:26:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 12 17:26:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:26:47 np0005481680 ovn_controller[154617]: 2025-10-12T21:26:47Z|00057|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct 12 17:26:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:47 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:26:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:26:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:26:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:48 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:48 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 12 17:26:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:49.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:49 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:49 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:26:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:49 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:26:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:50 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:26:50 np0005481680 nova_compute[264665]: 2025-10-12 21:26:50.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:50 np0005481680 nova_compute[264665]: 2025-10-12 21:26:50.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212650 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:26:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:50 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:50 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 12 17:26:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:51 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:52] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:26:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:26:52] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:26:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:52 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212652 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
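The haproxy lines report Layer4 checks passing against the ganesha backends, which also explains the recurring ganesha "svc_vc_recv ... proxy header ... (will set dead)" events: a probe that connects and closes without sending a PROXY-protocol header looks like a truncated request to the NFS listener. A Layer4-style probe is just a connect-then-close, sketched here (address and port are illustrative):

    import socket

    def layer4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        """True if a TCP connection can be established; haproxy's
        'Layer4 check passed' is exactly this: connect, then close."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(layer4_check("192.168.122.100", 2049))  # NFS port, illustrative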
Oct 12 17:26:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:52 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:26:53 np0005481680 podman[276951]: 2025-10-12 21:26:53.133368686 +0000 UTC m=+0.092353614 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:26:53 np0005481680 podman[276952]: 2025-10-12 21:26:53.170829778 +0000 UTC m=+0.125263239 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:26:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:53 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:54 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:54 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 12 17:26:55 np0005481680 nova_compute[264665]: 2025-10-12 21:26:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:55.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:55 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:55 np0005481680 nova_compute[264665]: 2025-10-12 21:26:55.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:26:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:26:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:56 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:56 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 5 op/s
Oct 12 17:26:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:26:57.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:26:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:57.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:57 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.080 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.080 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.100 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.199 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.200 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.214 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.214 2 INFO nova.compute.claims [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Claim successful on node compute-0.ctlplane.example.com
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.316 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
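Before claiming disk for the new instance, nova shells out to `ceph df --format=json` (the command logged above, returning 0 in 0.452s a few lines below). The same call can be reproduced directly; the JSON field names in this sketch follow the usual `ceph df` schema but should be verified against your Ceph release:

    import json
    import subprocess

    # Exactly the command nova logs above; requires /etc/ceph/ceph.conf
    # and the 'openstack' keyring on the host.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    for pool in df["pools"]:
        # 'max_avail' is assumed per the common ceph df JSON layout
        print(pool["name"], pool["stats"].get("max_avail"))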
Oct 12 17:26:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:58 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:26:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:26:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:26:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:58 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:26:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392039849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.768 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.777 2 DEBUG nova.compute.provider_tree [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.794 2 DEBUG nova.scheduler.client.report [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
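The inventory dict above is what the Placement service sees for this node. Usable capacity per resource class is (total - reserved) x allocation_ratio, so this host advertises 32 VCPU (8 x 4.0), 7168 MB of RAM (7680 - 512 at ratio 1.0), and 52.2 GB of disk (59 - 1 at ratio 0.9):

    # Placement capacity math for the inventory logged above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2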
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.823 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.824 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.899 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.900 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.920 2 INFO nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 12 17:26:58 np0005481680 nova_compute[264665]: 2025-10-12 21:26:58.941 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 12 17:26:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 5 op/s
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.050 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.052 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.053 2 INFO nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Creating image(s)
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.091 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:26:59 np0005481680 podman[277026]: 2025-10-12 21:26:59.133162252 +0000 UTC m=+0.093490963 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.133 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.176 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.182 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.273 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.275 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.278 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.278 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.321 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.329 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:26:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:26:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:26:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:26:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:26:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:26:59 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.499 2 DEBUG nova.policy [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.683 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.783 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
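The root disk lands in the vms pool via the `rbd import` command logged above, then is grown to the flavor's root disk size (1073741824 bytes = exactly 1 GiB). A sketch of the two steps; the import is the log's command verbatim, while the resize invocation is an assumption (nova performs it through librbd rather than the CLI):

    import subprocess

    image = "962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk"
    base = ("/var/lib/nova/instances/_base/"
            "7497bb5386651df92e6b6f594b508b7cfd59032d")

    # Step 1: exactly the command nova logged above.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])

    # Step 2: grow to 1 GiB. CLI equivalent of librbd resize; assumed form.
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--size", "1G", image,
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])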
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.932 2 DEBUG nova.objects.instance [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid 962a5d4f-4210-48cd-bfa7-d21430a1ad67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.948 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.949 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Ensure instance console log exists: /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.949 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.950 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:26:59 np0005481680 nova_compute[264665]: 2025-10-12 21:26:59.950 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:00 np0005481680 nova_compute[264665]: 2025-10-12 21:27:00.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:00 np0005481680 nova_compute[264665]: 2025-10-12 21:27:00.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:00.521 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:27:00 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:00.522 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:27:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:00 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:00 np0005481680 nova_compute[264665]: 2025-10-12 21:27:00.689 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Successfully created port: 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 12 17:27:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:27:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:00.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:27:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:00 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 143 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 763 KiB/s wr, 19 op/s
Oct 12 17:27:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:01 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:01 np0005481680 nova_compute[264665]: 2025-10-12 21:27:01.928 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Successfully updated port: 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 12 17:27:01 np0005481680 nova_compute[264665]: 2025-10-12 21:27:01.946 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:27:01 np0005481680 nova_compute[264665]: 2025-10-12 21:27:01.946 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:27:01 np0005481680 nova_compute[264665]: 2025-10-12 21:27:01.946 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:27:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:02] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:27:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:02] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:27:02 np0005481680 nova_compute[264665]: 2025-10-12 21:27:02.021 2 DEBUG nova.compute.manager [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-changed-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:02 np0005481680 nova_compute[264665]: 2025-10-12 21:27:02.021 2 DEBUG nova.compute.manager [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Refreshing instance network info cache due to event network-changed-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:27:02 np0005481680 nova_compute[264665]: 2025-10-12 21:27:02.022 2 DEBUG oslo_concurrency.lockutils [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:27:02 np0005481680 nova_compute[264665]: 2025-10-12 21:27:02.086 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 12 17:27:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:02 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
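The steady once-per-second anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 have the shape of load-balancer health checks against radosgw. A sketch of an equivalent probe; the port is an assumption, since the log does not record which one the beast frontend listens on:

    # Hypothetical health probe against the RGW beast frontend.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200, matching the log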
Oct 12 17:27:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:02 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.008 2 DEBUG nova.network.neutron [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updating instance_info_cache with network_info: [{"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.029 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.029 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Instance network_info: |[{"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.030 2 DEBUG oslo_concurrency.lockutils [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.030 2 DEBUG nova.network.neutron [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Refreshing network info cache for port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.035 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Start _get_guest_xml network_info=[{"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 12 17:27:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 143 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 762 KiB/s wr, 16 op/s
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.042 2 WARNING nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.047 2 DEBUG nova.virt.libvirt.host [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.047 2 DEBUG nova.virt.libvirt.host [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.055 2 DEBUG nova.virt.libvirt.host [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.056 2 DEBUG nova.virt.libvirt.host [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.057 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.057 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.058 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.058 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.058 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.058 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.059 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.059 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.059 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.060 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.060 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.060 2 DEBUG nova.virt.hardware [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
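For a 1-vCPU flavor with no socket/core/thread limits, the search above can only produce one factorization, which is why a single VirtCPUTopology(cores=1,sockets=1,threads=1) comes back. A toy version of that enumeration:

    # Enumerate (sockets, cores, threads) triples whose product is vcpus.
    import itertools

    def possible_topologies(vcpus: int, max_each: int = 65536):
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and max(s, c, t) <= max_each:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]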
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.065 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:27:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:27:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:27:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:03.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:03 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:27:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/499607573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.576 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.607 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:27:03 np0005481680 nova_compute[264665]: 2025-10-12 21:27:03.611 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:27:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:27:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629543480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.099 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
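nova runs ceph mon dump to learn the monitor addresses it will embed as <host> entries in the guest XML below. A sketch of that parsing step, with the JSON trimmed to an assumed minimal shape containing only the fields used here:

    # Pull monitor IPs out of a (trimmed, assumed-shape) `mon dump` document.
    import json

    mon_dump = json.loads("""
    {"mons": [
      {"name": "compute-0", "public_addr": "192.168.122.100:6789/0"},
      {"name": "compute-2", "public_addr": "192.168.122.102:6789/0"},
      {"name": "compute-1", "public_addr": "192.168.122.101:6789/0"}
    ]}
    """)

    hosts = [m["public_addr"].split(":")[0] for m in mon_dump["mons"]]
    print(hosts)  # ['192.168.122.100', '192.168.122.102', '192.168.122.101']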
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.102 2 DEBUG nova.virt.libvirt.vif [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629377975',display_name='tempest-TestNetworkBasicOps-server-629377975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629377975',id=5,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGbOMbhtURWz1LS3xDbelbt7uQkXcyZbn82/PMQq5agiJyDDLH1vN7lW01aAmEye4czjO03wXd2UKKnep63VO5NJSge2ooJydZLuTs3bAgJwWPzKFup6mSurGYYMAA8R9A==',key_name='tempest-TestNetworkBasicOps-379595566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-mllk8rww',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:26:58Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=962a5d4f-4210-48cd-bfa7-d21430a1ad67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.103 2 DEBUG nova.network.os_vif_util [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.105 2 DEBUG nova.network.os_vif_util [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.107 2 DEBUG nova.objects.instance [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 962a5d4f-4210-48cd-bfa7-d21430a1ad67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.132 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <uuid>962a5d4f-4210-48cd-bfa7-d21430a1ad67</uuid>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <name>instance-00000005</name>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-629377975</nova:name>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:27:03</nova:creationTime>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <nova:port uuid="53bf1e1d-d55c-4c25-9bc8-45ac20b479a1">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="serial">962a5d4f-4210-48cd-bfa7-d21430a1ad67</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="uuid">962a5d4f-4210-48cd-bfa7-d21430a1ad67</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:80:65:30"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <target dev="tap53bf1e1d-d5"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/console.log" append="off"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:27:04 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:27:04 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:27:04 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:27:04 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
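The domain XML above wires both disks to RBD using the three monitors discovered a moment earlier. A quick stdlib way to pull the RBD source names and monitor hosts back out of such a document; xml_text below is a trimmed stand-in for the full <domain> dump:

    # Extract each RBD-backed disk's source name and monitor hosts.
    import xml.etree.ElementTree as ET

    xml_text = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>
    """

    root = ET.fromstring(xml_text)
    for disk in root.iter("disk"):
        src = disk.find("source")
        hosts = [h.get("name") for h in src.findall("host")]
        print(src.get("name"), hosts)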
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.133 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Preparing to wait for external event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.134 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.134 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.135 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.136 2 DEBUG nova.virt.libvirt.vif [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629377975',display_name='tempest-TestNetworkBasicOps-server-629377975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629377975',id=5,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGbOMbhtURWz1LS3xDbelbt7uQkXcyZbn82/PMQq5agiJyDDLH1vN7lW01aAmEye4czjO03wXd2UKKnep63VO5NJSge2ooJydZLuTs3bAgJwWPzKFup6mSurGYYMAA8R9A==',key_name='tempest-TestNetworkBasicOps-379595566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-mllk8rww',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:26:58Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=962a5d4f-4210-48cd-bfa7-d21430a1ad67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.136 2 DEBUG nova.network.os_vif_util [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.137 2 DEBUG nova.network.os_vif_util [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.138 2 DEBUG os_vif [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53bf1e1d-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.146 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53bf1e1d-d5, col_values=(('external_ids', {'iface-id': '53bf1e1d-d55c-4c25-9bc8-45ac20b479a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:65:30', 'vm-uuid': '962a5d4f-4210-48cd-bfa7-d21430a1ad67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
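The AddPortCommand/DbSetCommand transaction above is what actually plugs the tap device into br-int and tags it with the Neutron port ID so ovn-controller can bind it. For illustration only, the same change expressed as a single ovs-vsctl transaction; nova itself talks to ovsdb natively via ovsdbapp, as logged:

    # Equivalent CLI form of the two ovsdbapp commands (values from the log).
    import subprocess

    subprocess.run([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", "tap53bf1e1d-d5",
        "--", "set", "Interface", "tap53bf1e1d-d5",
        "external_ids:iface-id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:80:65:30",
        "external_ids:vm-uuid=962a5d4f-4210-48cd-bfa7-d21430a1ad67",
    ], check=True)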
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:04 np0005481680 NetworkManager[44859]: <info>  [1760304424.1902] manager: (tap53bf1e1d-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.204 2 INFO os_vif [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5')#033[00m
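The convert/plug/success sequence (os_vif_util.py:511 and :548, then os_vif/__init__.py:76) is nova handing the translated VIFOpenVSwitch object to the os-vif library, whose ovs plugin issues the OVSDB transaction shown above. A rough sketch of that entry point, assuming python3-os-vif; the field values are copied from the VIFOpenVSwitch repr in the log, and the InstanceInfo name comes from the systemd-machined line further down (instance-00000005):

    # Sketch: the os-vif plug call that yields "Successfully plugged vif ...".
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    inst = instance_info.InstanceInfo(
        uuid='962a5d4f-4210-48cd-bfa7-d21430a1ad67',
        name='instance-00000005')
    ovs_vif = vif.VIFOpenVSwitch(
        id='53bf1e1d-d55c-4c25-9bc8-45ac20b479a1',
        address='fa:16:3e:80:65:30',
        vif_name='tap53bf1e1d-d5',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=network.Network(id='a3baff2e-9660-47cd-b25c-715893014d3c'))

    os_vif.plug(ovs_vif, inst)  # needs privileges; raises on plugin failure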
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.273 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.274 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.274 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:80:65:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.275 2 INFO nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Using config drive#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.315 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:27:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:04 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:04 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.813 2 DEBUG nova.network.neutron [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updated VIF entry in instance network info cache for port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.814 2 DEBUG nova.network.neutron [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updating instance_info_cache with network_info: [{"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.844 2 DEBUG oslo_concurrency.lockutils [req-82e58c4f-6dd7-407d-a7f9-1dac1f5c0b19 req-77d3f478-6ba0-4432-8b94-be35c79241a4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.927 2 INFO nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Creating config drive at /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config#033[00m
Oct 12 17:27:04 np0005481680 nova_compute[264665]: 2025-10-12 21:27:04.934 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv265mqi6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:27:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.084 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv265mqi6" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.126 2 DEBUG nova.storage.rbd_utils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.130 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.336 2 DEBUG oslo_concurrency.processutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config 962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.337 2 INFO nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Deleting local config drive /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config because it was imported into RBD.#033[00m
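Lines 21:27:04.927 through 21:27:05.337 are the whole config-drive round trip: render the metadata into a temp dir, build an ISO9660 image with mkisofs, confirm no <uuid>_disk.config image exists in RBD yet, import the ISO into the vms pool, and delete the local file. A condensed sketch with oslo.concurrency's processutils, the same helper the log shows executing both commands; paths, pool, and cephx user are copied from the log, and processutils.execute raises ProcessExecutionError on any non-zero exit:

    # Sketch: config-drive build + RBD import, mirroring the two CMD lines above.
    import os
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '962a5d4f-4210-48cd-bfa7-d21430a1ad67/disk.config')

    # Build the ISO from the staged metadata directory (tmp path from the log).
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpv265mqi6')

    # Import into the 'vms' pool as the 'openstack' cephx user.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '962a5d4f-4210-48cd-bfa7-d21430a1ad67_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    os.unlink(iso)  # "Deleting local config drive ... imported into RBD."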
Oct 12 17:27:05 np0005481680 kernel: tap53bf1e1d-d5: entered promiscuous mode
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.4037] manager: (tap53bf1e1d-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 12 17:27:05 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:05Z|00058|binding|INFO|Claiming lport 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 for this chassis.
Oct 12 17:27:05 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:05Z|00059|binding|INFO|53bf1e1d-d55c-4c25-9bc8-45ac20b479a1: Claiming fa:16:3e:80:65:30 10.100.0.11
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:05.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.453 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:65:30 10.100.0.11'], port_security=['fa:16:3e:80:65:30 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '962a5d4f-4210-48cd-bfa7-d21430a1ad67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3baff2e-9660-47cd-b25c-715893014d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': '598bfc91-f974-4495-8850-14bbbd2703ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=257702e8-53ce-4ab0-a540-fa67b1e3dfab, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.454 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 in datapath a3baff2e-9660-47cd-b25c-715893014d3c bound to our chassis#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.455 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3baff2e-9660-47cd-b25c-715893014d3c#033[00m
Oct 12 17:27:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:05 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:05 np0005481680 systemd-machined[218338]: New machine qemu-3-instance-00000005.
Oct 12 17:27:05 np0005481680 systemd-udevd[277380]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.478 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bea7b2-2e3e-4764-bf35-1500608a5f2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.480 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3baff2e-91 in ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
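Creating the VETH pair is the metadata agent's namespace plumbing: one end (tapa3baff2e-90) stays in the root namespace and is added to br-int below, while the peer (tapa3baff2e-91) is moved into the per-network ovnmeta namespace to serve 169.254.169.254. Neutron drives this through its privsep ip_lib wrappers, which sit on top of pyroute2; a rough equivalent using pyroute2 directly, assuming root privileges and that the namespace already exists:

    # Sketch: veth pair with the peer end inside the ovnmeta namespace.
    # Rough pyroute2 equivalent of neutron's privileged ip_lib calls.
    from pyroute2 import IPRoute, NetNS

    ns_name = 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c'
    with IPRoute() as ipr:
        ipr.link('add', ifname='tapa3baff2e-90', kind='veth',
                 peer='tapa3baff2e-91')
        # Move the inner end into the metadata namespace, bring outer end up.
        idx = ipr.link_lookup(ifname='tapa3baff2e-91')[0]
        ipr.link('set', index=idx, net_ns_fd=ns_name)
        ipr.link('set', index=ipr.link_lookup(ifname='tapa3baff2e-90')[0],
                 state='up')
    ns = NetNS(ns_name)
    try:
        ns.link('set', index=ns.link_lookup(ifname='tapa3baff2e-91')[0],
                state='up')
    finally:
        ns.close()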
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.482 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3baff2e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.482 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[4d326758-d8c1-417a-9295-6ae07f72ef1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.483 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[43b7bbe9-00ca-46b9-9d9a-ab4d948ecb3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.4920] device (tap53bf1e1d-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.4926] device (tap53bf1e1d-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.498 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[bba4808d-6ccb-4c2e-b14f-05bad76c3d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.523 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.529 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1140d4-33d7-4c0a-bb13-7206875b6173]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:05Z|00060|binding|INFO|Setting lport 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 ovn-installed in OVS
Oct 12 17:27:05 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:05Z|00061|binding|INFO|Setting lport 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 up in Southbound
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.576 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[35af2131-6f16-466f-bddc-a75171e67543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 systemd-udevd[277383]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.5839] manager: (tapa3baff2e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.583 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[57bd42ad-e0f8-48b4-b7cc-99fcb1350da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.638 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[cdabe06d-a3f4-47b1-9850-709d8fbe5505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.643 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8fb460-7fa8-4943-95d0-817a285ec1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.6858] device (tapa3baff2e-90): carrier: link connected
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.694 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[fd54b2c3-5c99-4d2d-beea-180f0d3afb26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.721 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5c89ac-48c0-4a1d-a4f9-0bfae8e8d0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3baff2e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:64:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409231, 'reachable_time': 16029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277413, 'error': None, 'target': 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.744 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[7e95d36f-d4bb-429d-8951-3b0ed79c4535]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:6404'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409231, 'tstamp': 409231}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277414, 'error': None, 'target': 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.775 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[1d32b821-4a9b-4def-9558-e0e021aa29ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3baff2e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:64:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409231, 'reachable_time': 16029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277415, 'error': None, 'target': 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.823 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[fca3f6ca-91cd-447c-88df-7665c108a234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.904 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[e56a1589-ab79-4c4c-881d-f7e7da04d4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.906 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3baff2e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.906 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.907 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3baff2e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 NetworkManager[44859]: <info>  [1760304425.9107] manager: (tapa3baff2e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 12 17:27:05 np0005481680 kernel: tapa3baff2e-90: entered promiscuous mode
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.913 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3baff2e-90, col_values=(('external_ids', {'iface-id': '86665791-7039-410b-9eb5-d9af056770a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:05Z|00062|binding|INFO|Releasing lport 86665791-7039-410b-9eb5-d9af056770a9 from this chassis (sb_readonly=0)
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.949 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3baff2e-9660-47cd-b25c-715893014d3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3baff2e-9660-47cd-b25c-715893014d3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
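The ENOENT on the .pid.haproxy file is expected on first provisioning: before rendering a config the agent probes for an existing proxy's pidfile, and a missing file simply means no proxy is running yet. The probe is deliberately forgiving, roughly as follows (get_pid is a hypothetical stand-in for neutron's get_value_from_file):

    # Sketch: forgiving pidfile probe; missing/empty file -> None, no exception.
    def get_pid(path):
        try:
            with open(path) as f:
                content = f.read().strip()
            return int(content) if content else None
        except (OSError, ValueError):
            return None  # ENOENT here is the debug line above

    pid = get_pid('/var/lib/neutron/external/pids/'
                  'a3baff2e-9660-47cd-b25c-715893014d3c.pid.haproxy')
    if pid is None:
        pass  # render haproxy_cfg (dumped below) and spawn a new haproxy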
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.950 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d2d5a-f6aa-4537-aa65-513ba6762246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.951 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-a3baff2e-9660-47cd-b25c-715893014d3c
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/a3baff2e-9660-47cd-b25c-715893014d3c.pid.haproxy
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID a3baff2e-9660-47cd-b25c-715893014d3c
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:27:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:05.952 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c', 'env', 'PROCESS_TAG=haproxy-a3baff2e-9660-47cd-b25c-715893014d3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3baff2e-9660-47cd-b25c-715893014d3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
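That command list layers four things: sudo for escalation, neutron-rootwrap to filter the escalated command against /etc/neutron/rootwrap.conf, ip netns exec to confine the proxy to the ovnmeta namespace, and an env PROCESS_TAG for later process tracking, before finally running haproxy -f with the config rendered above. A bare-bones stand-in for neutron.agent.linux.utils.create_process using the standard library (subprocess here is an assumption; neutron adds its own logging and greenthread handling around the same command):

    # Sketch: spawn haproxy inside the ovnmeta namespace via rootwrap.
    import subprocess

    net_id = 'a3baff2e-9660-47cd-b25c-715893014d3c'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', 'ovnmeta-' + net_id,
           'env', 'PROCESS_TAG=haproxy-' + net_id,
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net_id]
    # The 'daemon' keyword in the config makes haproxy fork and detach, so a
    # successful launch returns promptly ("Loading success." below).
    subprocess.run(cmd, check=True)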
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.992 2 DEBUG nova.compute.manager [req-e2b9cea7-7406-47ad-b6fb-2f12456ba2c5 req-3c839265-7a64-4d42-acbf-c9a3bd26588f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.992 2 DEBUG oslo_concurrency.lockutils [req-e2b9cea7-7406-47ad-b6fb-2f12456ba2c5 req-3c839265-7a64-4d42-acbf-c9a3bd26588f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.993 2 DEBUG oslo_concurrency.lockutils [req-e2b9cea7-7406-47ad-b6fb-2f12456ba2c5 req-3c839265-7a64-4d42-acbf-c9a3bd26588f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.994 2 DEBUG oslo_concurrency.lockutils [req-e2b9cea7-7406-47ad-b6fb-2f12456ba2c5 req-3c839265-7a64-4d42-acbf-c9a3bd26588f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:05 np0005481680 nova_compute[264665]: 2025-10-12 21:27:05.994 2 DEBUG nova.compute.manager [req-e2b9cea7-7406-47ad-b6fb-2f12456ba2c5 req-3c839265-7a64-4d42-acbf-c9a3bd26588f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Processing event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
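The Acquiring/acquired/released triple around pop_instance_event is oslo.concurrency's named-lock pattern: the external-event handler and the spawning thread both serialize on a per-instance "<uuid>-events" lock so a waiter can never be popped while the registry is being mutated. The primitive in isolation, a minimal sketch with the UUID from the log:

    # Sketch: the per-instance named lock bracketing the event registry.
    from oslo_concurrency import lockutils

    instance_uuid = '962a5d4f-4210-48cd-bfa7-d21430a1ad67'
    with lockutils.lock(instance_uuid + '-events'):
        # pop the waiter for 'network-vif-plugged-<port-id>' and signal it;
        # the "acquired ... released" log lines bracket exactly this region.
        pass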
Oct 12 17:27:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:06 np0005481680 podman[277490]: 2025-10-12 21:27:06.416007928 +0000 UTC m=+0.076228689 container create 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:27:06 np0005481680 systemd[1]: Started libpod-conmon-438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0.scope.
Oct 12 17:27:06 np0005481680 podman[277490]: 2025-10-12 21:27:06.381353237 +0000 UTC m=+0.041574068 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:27:06 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:06 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183bc4ab4dfa7cc15132ea43d33d09efefdb2b6c62d0c91090b55d5931d3543c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:06 np0005481680 podman[277490]: 2025-10-12 21:27:06.527972875 +0000 UTC m=+0.188193656 container init 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 12 17:27:06 np0005481680 podman[277490]: 2025-10-12 21:27:06.533417295 +0000 UTC m=+0.193638046 container start 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 12 17:27:06 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [NOTICE]   (277509) : New worker (277511) forked
Oct 12 17:27:06 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [NOTICE]   (277509) : Loading success.
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.653 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304426.6526973, 962a5d4f-4210-48cd-bfa7-d21430a1ad67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.653 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] VM Started (Lifecycle Event)#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.655 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.658 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.662 2 INFO nova.virt.libvirt.driver [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Instance spawned successfully.#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.662 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:27:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:06 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0036d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.704 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.708 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.715 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.715 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.715 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.716 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.716 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.717 2 DEBUG nova.virt.libvirt.driver [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.735 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.735 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304426.654873, 962a5d4f-4210-48cd-bfa7-d21430a1ad67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.735 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:27:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:06 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.765 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.769 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304426.6573267, 962a5d4f-4210-48cd-bfa7-d21430a1ad67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.769 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.789 2 INFO nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Took 7.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.790 2 DEBUG nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.793 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.801 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.859 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
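Both "Synchronizing instance power state" lines carry the same decision inputs: the hypervisor reports power_state 1 (RUNNING in nova.compute.power_state) while the DB still holds 0 (NOSTATE), but task_state is still 'spawning', so the handler skips the sync rather than fighting the in-flight build. The guard reduces to something like this sketch (should_sync is a hypothetical condensation, not nova's exact code):

    # Sketch: lifecycle-event power-state sync guard, with nova's numeric
    # power states (0 = NOSTATE, 1 = RUNNING) and values from the log lines.
    def should_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:        # e.g. 'spawning'
            return False                  # "... has a pending task ... Skip."
        return db_power_state != vm_power_state

    assert should_sync(0, 1, 'spawning') is False  # the two events above
    assert should_sync(0, 1, None) is True         # would trigger a real sync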
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.918 2 INFO nova.compute.manager [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Took 8.76 seconds to build instance.#033[00m
Oct 12 17:27:06 np0005481680 nova_compute[264665]: 2025-10-12 21:27:06.955 2 DEBUG oslo_concurrency.lockutils [None req-0df35b65-9fb0-456e-ba54-fef6b0a9764f 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:27:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:07.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:27:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:07.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:07 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.093 2 DEBUG nova.compute.manager [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.094 2 DEBUG oslo_concurrency.lockutils [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.094 2 DEBUG oslo_concurrency.lockutils [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.094 2 DEBUG oslo_concurrency.lockutils [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.094 2 DEBUG nova.compute.manager [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] No waiting events found dispatching network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:27:08 np0005481680 nova_compute[264665]: 2025-10-12 21:27:08.094 2 WARNING nova.compute.manager [req-7a2abcee-67d9-4fb9-8a74-2fe4cbfe4a7d req-593a97ef-4e71-4fe2-bf28-199fc180e438 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received unexpected event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 for instance with vm_state active and task_state None.#033[00m
Oct 12 17:27:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:08 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:08.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:08 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0036d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:27:09 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:09Z|00063|binding|INFO|Releasing lport 86665791-7039-410b-9eb5-d9af056770a9 from this chassis (sb_readonly=0)
Oct 12 17:27:09 np0005481680 nova_compute[264665]: 2025-10-12 21:27:09.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:09 np0005481680 NetworkManager[44859]: <info>  [1760304429.1384] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 12 17:27:09 np0005481680 NetworkManager[44859]: <info>  [1760304429.1392] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 12 17:27:09 np0005481680 nova_compute[264665]: 2025-10-12 21:27:09.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:09 np0005481680 nova_compute[264665]: 2025-10-12 21:27:09.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:09 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:09Z|00064|binding|INFO|Releasing lport 86665791-7039-410b-9eb5-d9af056770a9 from this chassis (sb_readonly=0)
Oct 12 17:27:09 np0005481680 nova_compute[264665]: 2025-10-12 21:27:09.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:09 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c840032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.195 2 DEBUG nova.compute.manager [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-changed-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.196 2 DEBUG nova.compute.manager [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Refreshing instance network info cache due to event network-changed-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.197 2 DEBUG oslo_concurrency.lockutils [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.197 2 DEBUG oslo_concurrency.lockutils [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.197 2 DEBUG nova.network.neutron [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Refreshing network info cache for port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:27:10 np0005481680 nova_compute[264665]: 2025-10-12 21:27:10.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:10 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212710 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:27:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:10.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:10 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 12 17:27:11 np0005481680 podman[277525]: 2025-10-12 21:27:11.130883885 +0000 UTC m=+0.080072410 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 12 17:27:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:11 np0005481680 nova_compute[264665]: 2025-10-12 21:27:11.406 2 DEBUG nova.network.neutron [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updated VIF entry in instance network info cache for port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:27:11 np0005481680 nova_compute[264665]: 2025-10-12 21:27:11.407 2 DEBUG nova.network.neutron [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updating instance_info_cache with network_info: [{"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:27:11 np0005481680 nova_compute[264665]: 2025-10-12 21:27:11.427 2 DEBUG oslo_concurrency.lockutils [req-5ce65fae-1e5e-4ab7-a5c7-d4957d67a8af req-7e389203-1fb0-4701-9588-056017097d41 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-962a5d4f-4210-48cd-bfa7-d21430a1ad67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:27:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:11 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0036d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:12 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:12.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:12 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Oct 12 17:27:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:13 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:14 np0005481680 nova_compute[264665]: 2025-10-12 21:27:14.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:14 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0036d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:14 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Oct 12 17:27:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:15 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:15 np0005481680 nova_compute[264665]: 2025-10-12 21:27:15.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:16 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:16 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:16 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c8c0036d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:27:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:17.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:27:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:17.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:27:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:17.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:27:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:17.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:17 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:27:18
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', '.nfs', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:27:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:27:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:27:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:18.364 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:18.365 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:18 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:18 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108131674480693 of space, bias 1.0, pg target 0.3324395023442079 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:27:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:27:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:27:19 np0005481680 nova_compute[264665]: 2025-10-12 21:27:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:27:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:19.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:27:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:19 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:19 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:27:20 np0005481680 nova_compute[264665]: 2025-10-12 21:27:20.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:20 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:20Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:65:30 10.100.0.11
Oct 12 17:27:20 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:20Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:65:30 10.100.0.11
Oct 12 17:27:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:27:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:27:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:20 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:20 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 12 17:27:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:21 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cac001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:22] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:22] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:22 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:22 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:22 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Oct 12 17:27:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:23 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:27:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:23 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:27:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:23 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:24 np0005481680 podman[277669]: 2025-10-12 21:27:24.148029733 +0000 UTC m=+0.095795223 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:27:24 np0005481680 nova_compute[264665]: 2025-10-12 21:27:24.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:24 np0005481680 podman[277670]: 2025-10-12 21:27:24.230476531 +0000 UTC m=+0.176648541 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller)
Oct 12 17:27:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:24 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cac002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:24.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:24 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:27:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:27:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:25 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:25 np0005481680 nova_compute[264665]: 2025-10-12 21:27:25.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:25 np0005481680 nova_compute[264665]: 2025-10-12 21:27:25.848 2 INFO nova.compute.manager [None req-ad17385e-a732-479b-a55a-c0cfc40ef870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Get console output#033[00m
Oct 12 17:27:25 np0005481680 nova_compute[264665]: 2025-10-12 21:27:25.858 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.227 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.228 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.229 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.230 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.230 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.233 2 INFO nova.compute.manager [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Terminating instance#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.235 2 DEBUG nova.compute.manager [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:27:26 np0005481680 kernel: tap53bf1e1d-d5 (unregistering): left promiscuous mode
Oct 12 17:27:26 np0005481680 NetworkManager[44859]: <info>  [1760304446.3435] device (tap53bf1e1d-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:27:26 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:26Z|00065|binding|INFO|Releasing lport 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 from this chassis (sb_readonly=0)
Oct 12 17:27:26 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:26Z|00066|binding|INFO|Setting lport 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 down in Southbound
Oct 12 17:27:26 np0005481680 ovn_controller[154617]: 2025-10-12T21:27:26Z|00067|binding|INFO|Removing iface tap53bf1e1d-d5 ovn-installed in OVS
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:26 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:26.370 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:65:30 10.100.0.11'], port_security=['fa:16:3e:80:65:30 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '962a5d4f-4210-48cd-bfa7-d21430a1ad67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3baff2e-9660-47cd-b25c-715893014d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '4', 'neutron:security_group_ids': '598bfc91-f974-4495-8850-14bbbd2703ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=257702e8-53ce-4ab0-a540-fa67b1e3dfab, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:27:26 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:26.372 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 in datapath a3baff2e-9660-47cd-b25c-715893014d3c unbound from our chassis#033[00m
Oct 12 17:27:26 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:26.374 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3baff2e-9660-47cd-b25c-715893014d3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:27:26 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:26.376 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[5e83ab7f-f2b7-4e87-b4ef-75969a637652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:26 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:26.376 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c namespace which is not needed anymore#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:26 np0005481680 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 12 17:27:26 np0005481680 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 13.683s CPU time.
Oct 12 17:27:26 np0005481680 systemd-machined[218338]: Machine qemu-3-instance-00000005 terminated.
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.684 2 INFO nova.virt.libvirt.driver [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Instance destroyed successfully.#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.684 2 DEBUG nova.objects.instance [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid 962a5d4f-4210-48cd-bfa7-d21430a1ad67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:27:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:26 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.705 2 DEBUG nova.virt.libvirt.vif [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629377975',display_name='tempest-TestNetworkBasicOps-server-629377975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629377975',id=5,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGbOMbhtURWz1LS3xDbelbt7uQkXcyZbn82/PMQq5agiJyDDLH1vN7lW01aAmEye4czjO03wXd2UKKnep63VO5NJSge2ooJydZLuTs3bAgJwWPzKFup6mSurGYYMAA8R9A==',key_name='tempest-TestNetworkBasicOps-379595566',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:27:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-mllk8rww',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:27:06Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=962a5d4f-4210-48cd-bfa7-d21430a1ad67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.706 2 DEBUG nova.network.os_vif_util [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "address": "fa:16:3e:80:65:30", "network": {"id": "a3baff2e-9660-47cd-b25c-715893014d3c", "bridge": "br-int", "label": "tempest-network-smoke--1930040967", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53bf1e1d-d5", "ovs_interfaceid": "53bf1e1d-d55c-4c25-9bc8-45ac20b479a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.707 2 DEBUG nova.network.os_vif_util [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:27:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:26 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.709 2 DEBUG os_vif [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53bf1e1d-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.721 2 INFO os_vif [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:65:30,bridge_name='br-int',has_traffic_filtering=True,id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1,network=Network(a3baff2e-9660-47cd-b25c-715893014d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53bf1e1d-d5')#033[00m
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:27:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:27:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:26 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cac002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.890 2 DEBUG nova.compute.manager [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-unplugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.890 2 DEBUG oslo_concurrency.lockutils [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.891 2 DEBUG oslo_concurrency.lockutils [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.891 2 DEBUG oslo_concurrency.lockutils [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.892 2 DEBUG nova.compute.manager [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] No waiting events found dispatching network-vif-unplugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:27:26 np0005481680 nova_compute[264665]: 2025-10-12 21:27:26.892 2 DEBUG nova.compute.manager [req-7bbb2467-6e6b-437c-8c3a-6711caf5dcc3 req-7905fc92-1515-4010-b1cc-b42c90703433 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-unplugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 12 17:27:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:27:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:27.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:27:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:27.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:27 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:28 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c84003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:27:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:28.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:27:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:28 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c9c0030e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.997 2 DEBUG nova.compute.manager [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.997 2 DEBUG oslo_concurrency.lockutils [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.998 2 DEBUG oslo_concurrency.lockutils [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.998 2 DEBUG oslo_concurrency.lockutils [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.998 2 DEBUG nova.compute.manager [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] No waiting events found dispatching network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:27:28 np0005481680 nova_compute[264665]: 2025-10-12 21:27:28.999 2 WARNING nova.compute.manager [req-d56c1b94-1033-450e-9d1d-a3b0cb69bc4a req-48edd076-7ffa-4ae9-a2de-d8c496a77597 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received unexpected event network-vif-plugged-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 for instance with vm_state active and task_state deleting.#033[00m
Oct 12 17:27:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 12 17:27:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:29.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:29 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cac002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:27:29 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [NOTICE]   (277509) : haproxy version is 2.8.14-c23fe91
Oct 12 17:27:29 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [NOTICE]   (277509) : path to executable is /usr/sbin/haproxy
Oct 12 17:27:29 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [WARNING]  (277509) : Exiting Master process...
Oct 12 17:27:29 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [ALERT]    (277509) : Current worker (277511) exited with code 143 (Terminated)
Oct 12 17:27:29 np0005481680 neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c[277505]: [WARNING]  (277509) : All workers exited. Exiting... (0)
Oct 12 17:27:29 np0005481680 systemd[1]: libpod-438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0.scope: Deactivated successfully.
Oct 12 17:27:29 np0005481680 podman[277741]: 2025-10-12 21:27:29.572562748 +0000 UTC m=+3.031099403 container died 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 12 17:27:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:30 np0005481680 nova_compute[264665]: 2025-10-12 21:27:30.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:27:30 np0005481680 kernel: ganesha.nfsd[276934]: segfault at 50 ip 00007f1d5b7bd32e sp 00007f1d297f9210 error 4 in libntirpc.so.5.8[7f1d5b7a2000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 12 17:27:30 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:27:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[276787]: 12/10/2025 21:27:30 : epoch 68ec1d0a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c90003b40 fd 38 proxy ignored for local
Oct 12 17:27:30 np0005481680 systemd[1]: Started Process Core Dump (PID 277812/UID 0).
Oct 12 17:27:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:31 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:27:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Oct 12 17:27:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0-userdata-shm.mount: Deactivated successfully.
Oct 12 17:27:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-183bc4ab4dfa7cc15132ea43d33d09efefdb2b6c62d0c91090b55d5931d3543c-merged.mount: Deactivated successfully.
Oct 12 17:27:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:31.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:31 np0005481680 nova_compute[264665]: 2025-10-12 21:27:31.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:32] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:32] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:32 np0005481680 podman[277789]: 2025-10-12 21:27:32.083022669 +0000 UTC m=+2.481062787 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 12 17:27:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:32.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 104 KiB/s wr, 24 op/s
Oct 12 17:27:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:33.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:34.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 139 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 104 KiB/s wr, 30 op/s
Oct 12 17:27:35 np0005481680 systemd-coredump[277813]: Process 276791 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 44:#012#0  0x00007f1d5b7bd32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 12 17:27:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:27:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:27:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:35 np0005481680 systemd[1]: systemd-coredump@14-277812-0.service: Deactivated successfully.
Oct 12 17:27:35 np0005481680 systemd[1]: systemd-coredump@14-277812-0.service: Consumed 1.348s CPU time.
Oct 12 17:27:35 np0005481680 nova_compute[264665]: 2025-10-12 21:27:35.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:35 np0005481680 nova_compute[264665]: 2025-10-12 21:27:35.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:27:36 np0005481680 podman[277741]: 2025-10-12 21:27:36.296559802 +0000 UTC m=+9.755096457 container cleanup 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:27:36 np0005481680 systemd[1]: libpod-conmon-438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0.scope: Deactivated successfully.
Oct 12 17:27:36 np0005481680 podman[277832]: 2025-10-12 21:27:36.361029199 +0000 UTC m=+0.782558590 container died c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:36 np0005481680 nova_compute[264665]: 2025-10-12 21:27:36.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:36 np0005481680 nova_compute[264665]: 2025-10-12 21:27:36.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:36 np0005481680 nova_compute[264665]: 2025-10-12 21:27:36.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:36 np0005481680 nova_compute[264665]: 2025-10-12 21:27:36.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:36.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-cdae12985c54986ac32a576053f044bb69f037c15fde475f55d7cc68015ebb6f-merged.mount: Deactivated successfully.
Oct 12 17:27:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 139 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 13 KiB/s wr, 14 op/s
Oct 12 17:27:37 np0005481680 podman[277832]: 2025-10-12 21:27:37.195639076 +0000 UTC m=+1.617168427 container remove c692293336145b4492f928f77e93a2fdebe6d559f938d35f52858d69c8abac79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 12 17:27:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:37.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:27:37 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:27:37 np0005481680 podman[277896]: 2025-10-12 21:27:37.376957075 +0000 UTC m=+1.035166901 container remove 438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.388 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[0f905305-8bcf-429d-8c71-d1c7e679be04]: (4, ('Sun Oct 12 09:27:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c (438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0)\n438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0\nSun Oct 12 09:27:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c (438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0)\n438b440649438647ba1570bb591116621d134b7224fc9dc801fb9e49b1a858f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.390 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0fe8a3-5e74-4405-8df0-91b62f733a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.391 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3baff2e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:37 np0005481680 kernel: tapa3baff2e-90: left promiscuous mode
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.435 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ec0526-ac2f-49d5-8e8a-7019948d2010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:37 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.454 2 INFO nova.virt.libvirt.driver [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Deleting instance files /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67_del#033[00m
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.455 2 INFO nova.virt.libvirt.driver [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Deletion of /var/lib/nova/instances/962a5d4f-4210-48cd-bfa7-d21430a1ad67_del complete#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.463 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[80099ff8-08e8-4968-af70-709189977420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.465 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2247af-cf54-4bcc-b154-fda5158e659e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.484 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[665c776c-2835-449b-9e2f-1ab1b10f5d37]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409219, 'reachable_time': 15062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277980, 'error': None, 'target': 'ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.488 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3baff2e-9660-47cd-b25c-715893014d3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:27:37 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:27:37.488 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8f8aa3-07cd-4d03-a1dd-614c7a50a287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:27:37 np0005481680 systemd[1]: run-netns-ovnmeta\x2da3baff2e\x2d9660\x2d47cd\x2db25c\x2d715893014d3c.mount: Deactivated successfully.
Oct 12 17:27:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:37 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:27:37 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.099s CPU time.
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.628500339 +0000 UTC m=+0.094267974 container create 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.569670737 +0000 UTC m=+0.035438372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:27:37 np0005481680 nova_compute[264665]: 2025-10-12 21:27:37.665 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:27:37 np0005481680 systemd[1]: Started libpod-conmon-5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910.scope.
Oct 12 17:27:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.829217037 +0000 UTC m=+0.294984672 container init 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.844154911 +0000 UTC m=+0.309922546 container start 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:27:37 np0005481680 amazing_hopper[278008]: 167 167
Oct 12 17:27:37 np0005481680 systemd[1]: libpod-5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910.scope: Deactivated successfully.
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.867838959 +0000 UTC m=+0.333606594 container attach 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:27:37 np0005481680 podman[277986]: 2025-10-12 21:27:37.86981502 +0000 UTC m=+0.335582665 container died 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:27:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-dc8d46b08fc3f892a0cc39d8a8d0662ed7e622e668930c6d5d0e72520eb06efc-merged.mount: Deactivated successfully.
Oct 12 17:27:38 np0005481680 podman[277986]: 2025-10-12 21:27:38.038658808 +0000 UTC m=+0.504426433 container remove 5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:27:38 np0005481680 systemd[1]: libpod-conmon-5f16fb84035d2a36c74be2fe40978145ab950e8c771ceb1f27f218db5dcde910.scope: Deactivated successfully.
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.181 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.182 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.183 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.183 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.183 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.280 2 INFO nova.compute.manager [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Took 12.04 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.282 2 DEBUG oslo.service.loopingcall [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.283 2 DEBUG nova.compute.manager [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.283 2 DEBUG nova.network.neutron [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.316905569 +0000 UTC m=+0.099942259 container create cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.263594039 +0000 UTC m=+0.046630759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:38 np0005481680 systemd[1]: Started libpod-conmon-cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274.scope.
Oct 12 17:27:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.513124111 +0000 UTC m=+0.296160801 container init cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.522946623 +0000 UTC m=+0.305983303 container start cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.556996799 +0000 UTC m=+0.340033489 container attach cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:27:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:27:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489244482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:27:38 np0005481680 nova_compute[264665]: 2025-10-12 21:27:38.701 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
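[Editor's note: the `ceph df --format=json` subprocess above is how the resource tracker samples Ceph pool capacity during its audit. A minimal sketch of the same call and of reading the cluster totals out of its JSON; the `--id openstack` / `--conf` values simply mirror the log line and assume a working client keyring on the host.]

```python
import json
import subprocess

# Run the same command logged by oslo_concurrency.processutils above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

# `ceph df --format=json` exposes cluster-wide totals under "stats".
stats = json.loads(out)["stats"]
free_gib = stats["total_avail_bytes"] / 1024 ** 3
total_gib = stats["total_bytes"] / 1024 ** 3
print(f"ceph capacity: {free_gib:.1f} GiB free of {total_gib:.1f} GiB")
```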
Oct 12 17:27:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212738 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:27:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:38.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
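[Editor's note: the anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102 recurring every second or so in the radosgw access log look like load-balancer health probes. A minimal sketch of an equivalent probe; the host and port (8080 for the beast frontend) are assumptions, not taken from the log.]

```python
import http.client

# Issue the same kind of anonymous HEAD / probe seen in the beast access log.
conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # 200 here matches the http_status=200 entries above
conn.close()
```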
Oct 12 17:27:38 np0005481680 frosty_feistel[278071]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:27:38 np0005481680 frosty_feistel[278071]: --> All data devices are unavailable
Oct 12 17:27:38 np0005481680 systemd[1]: libpod-cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274.scope: Deactivated successfully.
Oct 12 17:27:38 np0005481680 podman[278036]: 2025-10-12 21:27:38.87186493 +0000 UTC m=+0.654901590 container died cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-14abee0dbc172829dacf42e8b3d2214bf90ea72a15867965daee60c26bce78b4-merged.mount: Deactivated successfully.
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.008 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.012 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=59.94255065917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.012 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.012 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:27:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 139 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 13 KiB/s wr, 14 op/s
Oct 12 17:27:39 np0005481680 podman[278036]: 2025-10-12 21:27:39.086574787 +0000 UTC m=+0.869611447 container remove cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:27:39 np0005481680 systemd[1]: libpod-conmon-cd7a0e9b6a1ca8660f4ba72a586d66d64b48e8fd8d9ea7e30a00a9b0454f8274.scope: Deactivated successfully.
Oct 12 17:27:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.580 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance 962a5d4f-4210-48cd-bfa7-d21430a1ad67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.581 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.581 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:27:39 np0005481680 nova_compute[264665]: 2025-10-12 21:27:39.614 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:27:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:39 np0005481680 podman[278216]: 2025-10-12 21:27:39.963729407 +0000 UTC m=+0.119107412 container create b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:27:39 np0005481680 podman[278216]: 2025-10-12 21:27:39.885843956 +0000 UTC m=+0.041222011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:40 np0005481680 systemd[1]: Started libpod-conmon-b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a.scope.
Oct 12 17:27:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:27:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136994327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:27:40 np0005481680 podman[278216]: 2025-10-12 21:27:40.125624837 +0000 UTC m=+0.281002882 container init b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:40 np0005481680 podman[278216]: 2025-10-12 21:27:40.134553056 +0000 UTC m=+0.289931061 container start b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:27:40 np0005481680 awesome_hopper[278233]: 167 167
Oct 12 17:27:40 np0005481680 systemd[1]: libpod-b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a.scope: Deactivated successfully.
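[Editor's note: the bare "167 167" lines emitted by these short-lived ceph containers are consistent with cephadm's uid/gid probe, which runs a throwaway container to learn which uid and gid own the ceph data paths (167:167 is the ceph user in the official images). A minimal sketch of the same lookup; it assumes it runs inside the ceph image, where a "ceph" user exists.]

```python
import grp
import pwd

# Inside the quay.io/ceph/ceph image, user and group "ceph" map to 167/167,
# which is the "167 167" captured from the probe containers in this log.
uid = pwd.getpwnam("ceph").pw_uid
gid = grp.getgrnam("ceph").gr_gid
print(uid, gid)  # -> 167 167
```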
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.143 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.153 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:27:40 np0005481680 podman[278216]: 2025-10-12 21:27:40.169995478 +0000 UTC m=+0.325373523 container attach b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 12 17:27:40 np0005481680 podman[278216]: 2025-10-12 21:27:40.171832625 +0000 UTC m=+0.327210630 container died b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:27:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f4bc11299494746b5053be6c61664c5b64e1c926db944270bf606d91b737e3c7-merged.mount: Deactivated successfully.
Oct 12 17:27:40 np0005481680 podman[278216]: 2025-10-12 21:27:40.435900961 +0000 UTC m=+0.591278966 container remove b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:27:40 np0005481680 systemd[1]: libpod-conmon-b31a4297ce6797bb94f27c87ecfb2891511ec6519788e59cf7ba54952baa177a.scope: Deactivated successfully.
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.508 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
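[Editor's note: the inventory dict reported to placement above determines how much of each resource class is schedulable: capacity is (total - reserved) * allocation_ratio. A short worked sketch using the exact values from that log line.]

```python
# Effective capacity per resource class, as placement computes it:
# (total - reserved) * allocation_ratio. Values copied from the log line above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: (8 - 0) * 4.0 = 32, so up to 32 vCPUs can be allocated
# against this host's 8 physical vCPUs.
```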
Oct 12 17:27:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212740 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
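[Editor's note: the haproxy wrapper for the NFS service marks backends UP or DOWN from layer-4 checks, i.e. plain TCP connect attempts; "Connection refused" above is exactly such a failed connect. A minimal equivalent check; the backend address and port 2049 are hypothetical stand-ins for nfs.cephfs.2.]

```python
import socket

def layer4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connect succeeds, mirroring haproxy's L4 check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers "Connection refused" and connect timeouts
        return False

# Hypothetical backend address for the nfs.cephfs.2 server:
print("UP" if layer4_check("192.168.122.107", 2049) else "DOWN")
```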
Oct 12 17:27:40 np0005481680 podman[278261]: 2025-10-12 21:27:40.76161086 +0000 UTC m=+0.114176655 container create 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.781 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:27:40 np0005481680 nova_compute[264665]: 2025-10-12 21:27:40.782 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
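[Editor's note: the "Acquiring lock / Lock acquired / Lock released" triplets around "compute_resources" come from oslo.concurrency's lockutils, which serializes the resource-tracker methods on an in-process lock and logs hold times. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is a placeholder.]

```python
from oslo_concurrency import lockutils

# lockutils.synchronized wraps the function in a named in-process lock and,
# with debug logging enabled, emits the acquire/release lines seen above.
@lockutils.synchronized("compute_resources")
def update_available_resource():
    # ... audit hypervisor resources, then report usage to placement ...
    pass

update_available_resource()
```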
Oct 12 17:27:40 np0005481680 podman[278261]: 2025-10-12 21:27:40.696387324 +0000 UTC m=+0.048953169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:40.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:40 np0005481680 systemd[1]: Started libpod-conmon-2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a.scope.
Oct 12 17:27:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3789c38c3cb2b49cf0550a695cda0c8af1626e5c96729c165e96e0a999654ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3789c38c3cb2b49cf0550a695cda0c8af1626e5c96729c165e96e0a999654ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3789c38c3cb2b49cf0550a695cda0c8af1626e5c96729c165e96e0a999654ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:40 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3789c38c3cb2b49cf0550a695cda0c8af1626e5c96729c165e96e0a999654ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:40 np0005481680 podman[278261]: 2025-10-12 21:27:40.938235319 +0000 UTC m=+0.290801144 container init 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 12 17:27:40 np0005481680 podman[278261]: 2025-10-12 21:27:40.95072289 +0000 UTC m=+0.303288685 container start 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 17:27:40 np0005481680 podman[278261]: 2025-10-12 21:27:40.984120088 +0000 UTC m=+0.336685873 container attach 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]: {
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:    "0": [
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:        {
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "devices": [
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "/dev/loop3"
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            ],
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "lv_name": "ceph_lv0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "lv_size": "21470642176",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "name": "ceph_lv0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "tags": {
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.cluster_name": "ceph",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.crush_device_class": "",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.encrypted": "0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.osd_id": "0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.type": "block",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.vdo": "0",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:                "ceph.with_tpm": "0"
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            },
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "type": "block",
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:            "vg_name": "ceph_vg0"
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:        }
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]:    ]
Oct 12 17:27:41 np0005481680 xenodochial_bouman[278277]: }
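[Editor's note: the JSON printed by this container matches `ceph-volume lvm list --format json`: a mapping from OSD id to the logical volumes backing it. A short sketch of pulling the OSD-to-device mapping out of such a report; reading it from a capture file is an assumption, since cephadm consumes it from the container's stdout.]

```python
import json

# Parse `ceph-volume lvm list --format json` output, i.e. the JSON the
# container printed above. Keys are OSD ids; each value is a list of LVs.
with open("ceph-volume-lvm-list.json") as f:  # hypothetical capture file
    report = json.load(f)

for osd_id, lvs in report.items():
    for lv in lvs:
        devices = ",".join(lv["devices"])
        fsid = lv["tags"]["ceph.osd_fsid"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {devices} (fsid {fsid})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid 47abdfbc-...)
```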
Oct 12 17:27:41 np0005481680 systemd[1]: libpod-2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a.scope: Deactivated successfully.
Oct 12 17:27:41 np0005481680 podman[278261]: 2025-10-12 21:27:41.285993966 +0000 UTC m=+0.638559761 container died 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:27:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f3789c38c3cb2b49cf0550a695cda0c8af1626e5c96729c165e96e0a999654ef-merged.mount: Deactivated successfully.
Oct 12 17:27:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:41 np0005481680 podman[278261]: 2025-10-12 21:27:41.562184803 +0000 UTC m=+0.914750588 container remove 2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 17:27:41 np0005481680 systemd[1]: libpod-conmon-2fd7491f5043fef7737bb1b4a722d1b5313021ca2d59ce5420d64666c082891a.scope: Deactivated successfully.
Oct 12 17:27:41 np0005481680 podman[278289]: 2025-10-12 21:27:41.580877763 +0000 UTC m=+0.240327666 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.654 2 DEBUG nova.network.neutron [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.681 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304446.6785138, 962a5d4f-4210-48cd-bfa7-d21430a1ad67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.681 2 INFO nova.compute.manager [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] VM Stopped (Lifecycle Event)
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.777 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.779 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.779 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:27:41 np0005481680 nova_compute[264665]: 2025-10-12 21:27:41.779 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:27:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:42] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 12 17:27:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:42] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.201 2 DEBUG nova.compute.manager [req-469cbdc3-f41b-4420-beaa-bfd06b718196 req-f44db880-0ecc-45e5-8c71-fa6e78b01fbd 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Received event network-vif-deleted-53bf1e1d-d55c-4c25-9bc8-45ac20b479a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.203 2 INFO nova.compute.manager [req-469cbdc3-f41b-4420-beaa-bfd06b718196 req-f44db880-0ecc-45e5-8c71-fa6e78b01fbd 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Neutron deleted interface 53bf1e1d-d55c-4c25-9bc8-45ac20b479a1; detaching it from the instance and deleting it from the info cache
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.203 2 DEBUG nova.network.neutron [req-469cbdc3-f41b-4420-beaa-bfd06b718196 req-f44db880-0ecc-45e5-8c71-fa6e78b01fbd 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.226 2 DEBUG nova.compute.manager [None req-4955aba6-e17a-4a23-9e08-1a542d606c44 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.234 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.235 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.235 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.256 2 INFO nova.compute.manager [-] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Took 3.97 seconds to deallocate network for instance.
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.267 2 DEBUG nova.compute.manager [req-469cbdc3-f41b-4420-beaa-bfd06b718196 req-f44db880-0ecc-45e5-8c71-fa6e78b01fbd 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 962a5d4f-4210-48cd-bfa7-d21430a1ad67] Detach interface failed, port_id=53bf1e1d-d55c-4c25-9bc8-45ac20b479a1, reason: Instance 962a5d4f-4210-48cd-bfa7-d21430a1ad67 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.346 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.347 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.398 2 DEBUG oslo_concurrency.processutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.489376598 +0000 UTC m=+0.123486204 container create 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.409401943 +0000 UTC m=+0.043511589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:42 np0005481680 systemd[1]: Started libpod-conmon-09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485.scope.
Oct 12 17:27:42 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.692275132 +0000 UTC m=+0.326384748 container init 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.706705923 +0000 UTC m=+0.340815529 container start 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:27:42 np0005481680 boring_archimedes[278430]: 167 167
Oct 12 17:27:42 np0005481680 systemd[1]: libpod-09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485.scope: Deactivated successfully.
Oct 12 17:27:42 np0005481680 conmon[278430]: conmon 09c75238636783035afc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485.scope/container/memory.events
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.724244524 +0000 UTC m=+0.358354170 container attach 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:42 np0005481680 podman[278413]: 2025-10-12 21:27:42.724766337 +0000 UTC m=+0.358875933 container died 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:27:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a20c6db58337924dcdf2d39960976c741809ff5d94a84f25106a6a17433395c3-merged.mount: Deactivated successfully.
Oct 12 17:27:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:27:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445807103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.947 2 DEBUG oslo_concurrency.processutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.956 2 DEBUG nova.compute.provider_tree [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:27:42 np0005481680 nova_compute[264665]: 2025-10-12 21:27:42.989 2 DEBUG nova.scheduler.client.report [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:27:43 np0005481680 podman[278413]: 2025-10-12 21:27:43.042218785 +0000 UTC m=+0.676328351 container remove 09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_archimedes, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:27:43 np0005481680 systemd[1]: libpod-conmon-09c75238636783035afc8b8e9ade41df529d6d6e6aca5a69fcf44f999c38b485.scope: Deactivated successfully.
Oct 12 17:27:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Oct 12 17:27:43 np0005481680 nova_compute[264665]: 2025-10-12 21:27:43.065 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:27:43 np0005481680 nova_compute[264665]: 2025-10-12 21:27:43.131 2 INFO nova.scheduler.client.report [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance 962a5d4f-4210-48cd-bfa7-d21430a1ad67
Oct 12 17:27:43 np0005481680 nova_compute[264665]: 2025-10-12 21:27:43.226 2 DEBUG oslo_concurrency.lockutils [None req-783ef432-8221-421a-b87e-865976ce354a 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "962a5d4f-4210-48cd-bfa7-d21430a1ad67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:27:43 np0005481680 podman[278503]: 2025-10-12 21:27:43.309286198 +0000 UTC m=+0.097015104 container create 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:27:43 np0005481680 podman[278503]: 2025-10-12 21:27:43.258923724 +0000 UTC m=+0.046652690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:43 np0005481680 systemd[1]: Started libpod-conmon-793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e.scope.
Oct 12 17:27:43 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:27:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85faece0a7d2ad0de89b9bfbf4de61bd4ff85c8e33a4ba57012d0bd6d2f04456/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85faece0a7d2ad0de89b9bfbf4de61bd4ff85c8e33a4ba57012d0bd6d2f04456/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85faece0a7d2ad0de89b9bfbf4de61bd4ff85c8e33a4ba57012d0bd6d2f04456/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:43 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85faece0a7d2ad0de89b9bfbf4de61bd4ff85c8e33a4ba57012d0bd6d2f04456/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:43 np0005481680 podman[278503]: 2025-10-12 21:27:43.503998441 +0000 UTC m=+0.291727407 container init 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:27:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
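[Editor's note] The anonymous "HEAD / HTTP/1.0" probes recur roughly every two seconds, alternating between 192.168.122.100 and .102; that cadence looks like upstream health checks hitting the RGW beast frontend rather than real client traffic. To pull client, request, status, and latency out of these access lines, a sketch along these lines works (the regex is an assumption fitted to the format shown here):

```python
import re

# Parses radosgw "beast:" access lines like the one above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
        '[12/Oct/2025:21:27:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
print(m['client'], m['request'], m['status'], m['latency'])
```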
Oct 12 17:27:43 np0005481680 podman[278503]: 2025-10-12 21:27:43.516499582 +0000 UTC m=+0.304228488 container start 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct 12 17:27:43 np0005481680 podman[278503]: 2025-10-12 21:27:43.570262924 +0000 UTC m=+0.357991890 container attach 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 17:27:44 np0005481680 lvm[278594]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:27:44 np0005481680 lvm[278594]: VG ceph_vg0 finished
Oct 12 17:27:44 np0005481680 magical_greider[278519]: {}
Oct 12 17:27:44 np0005481680 systemd[1]: libpod-793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e.scope: Deactivated successfully.
Oct 12 17:27:44 np0005481680 podman[278503]: 2025-10-12 21:27:44.446043029 +0000 UTC m=+1.233771945 container died 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:44 np0005481680 systemd[1]: libpod-793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e.scope: Consumed 1.644s CPU time.
Oct 12 17:27:44 np0005481680 systemd[1]: var-lib-containers-storage-overlay-85faece0a7d2ad0de89b9bfbf4de61bd4ff85c8e33a4ba57012d0bd6d2f04456-merged.mount: Deactivated successfully.
Oct 12 17:27:44 np0005481680 podman[278503]: 2025-10-12 21:27:44.777033853 +0000 UTC m=+1.564762769 container remove 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_greider, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
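[Editor's note] The magical_greider container runs the full podman lifecycle (create, init, start, attach, died, remove) in about a second and emits only "{}"; together with the adjacent lvm scan and the mgr/cephadm config-key writes, that is consistent with a short-lived cephadm helper exec rather than a service failure. A sketch for timing such containers from these journal lines:

```python
import re
from collections import defaultdict

# Pairs podman "container start"/"container died" journal events (like the
# magical_greider sequence above) to estimate how long a container ran.
# Caveat: the m=+N.NN offsets are monotonic per podman invocation, so the
# subtraction is only meaningful when both events come from the same podman
# process, as they do here (both from podman[278503]).
EVENT_RE = re.compile(
    r'm=\+(?P<mono>[\d.]+) container (?P<event>\w+) (?P<cid>[0-9a-f]{64})'
)

def container_runtimes(lines):
    events = defaultdict(dict)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m['cid']][m['event']] = float(m['mono'])
    for cid, ev in events.items():
        if 'start' in ev and 'died' in ev:
            yield cid[:12], round(ev['died'] - ev['start'], 3)

sample = [
    "m=+0.304228488 container start 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e",
    "m=+1.233771945 container died 793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e",
]
print(list(container_runtimes(sample)))  # [('793b40b2c74e', 0.93)]
```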
Oct 12 17:27:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:44.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:44 np0005481680 systemd[1]: libpod-conmon-793b40b2c74ee395ff7dad6f6b8e3057d387c7aca24b10460bb2c7f21738349e.scope: Deactivated successfully.
Oct 12 17:27:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:27:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:27:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Oct 12 17:27:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 12 17:27:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:45.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 12 17:27:45 np0005481680 nova_compute[264665]: 2025-10-12 21:27:45.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:45 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:45 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:27:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:46.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:46 np0005481680 nova_compute[264665]: 2025-10-12 21:27:46.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 14 op/s
Oct 12 17:27:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:47.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
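[Editor's note] Alertmanager keeps failing to deliver to the two ceph-dashboard webhook receivers; "context deadline exceeded" means the POST timed out rather than being refused. A quick stdlib probe against the same endpoints (URLs copied from the message; the 5-second timeout and empty JSON body are arbitrary choices for the probe) can distinguish unreachable from merely slow:

```python
import urllib.request

# Reachability probe for the webhook receivers alertmanager is timing out on.
URLS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]
for url in URLS:
    try:
        resp = urllib.request.urlopen(
            urllib.request.Request(url, data=b"{}", method="POST"), timeout=5)
        print(url, "->", resp.status)
    except OSError as exc:  # covers URLError, timeouts, connection refused
        print(url, "->", exc)
```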
Oct 12 17:27:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:47.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:47 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 15.
Oct 12 17:27:47 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:27:47 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.099s CPU time.
Oct 12 17:27:47 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
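[Editor's note] The "restart counter is at 15" above is systemd's per-unit NRestarts property, so the value can be read back outside the journal. A sketch (unit name copied from the log lines above):

```python
import subprocess

# Read systemd's restart counter for the flapping NFS-Ganesha unit.
UNIT = ("ceph-5adb8c35-1b74-5730-a252-62321f654cd5"
        "@nfs.cephfs.2.0.compute-0.hypubd.service")
out = subprocess.run(
    ["systemctl", "show", "-p", "NRestarts", "--value", UNIT],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"{UNIT} restarted {out} times")
```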
Oct 12 17:27:48 np0005481680 podman[278691]: 2025-10-12 21:27:48.083557661 +0000 UTC m=+0.040330287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:27:48 np0005481680 podman[278691]: 2025-10-12 21:27:48.270616058 +0000 UTC m=+0.227388634 container create abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:27:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:27:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:27:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc3ffc6508a98e6217118de84afd26ac5f463dffbf1e79febb90a2ba345143/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc3ffc6508a98e6217118de84afd26ac5f463dffbf1e79febb90a2ba345143/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc3ffc6508a98e6217118de84afd26ac5f463dffbf1e79febb90a2ba345143/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccc3ffc6508a98e6217118de84afd26ac5f463dffbf1e79febb90a2ba345143/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:27:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:27:48 np0005481680 podman[278691]: 2025-10-12 21:27:48.483895509 +0000 UTC m=+0.440668095 container init abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:27:48 np0005481680 podman[278691]: 2025-10-12 21:27:48.49523436 +0000 UTC m=+0.452006926 container start abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:27:48 np0005481680 bash[278691]: abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b
Oct 12 17:27:48 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:27:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:48 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:27:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:48.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 14 op/s
Oct 12 17:27:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:49.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:50 np0005481680 nova_compute[264665]: 2025-10-12 21:27:50.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:50.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Oct 12 17:27:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:51.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:51 np0005481680 nova_compute[264665]: 2025-10-12 21:27:51.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:52] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:27:52] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:27:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:52.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 12 17:27:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:27:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:53.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:27:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:54 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:27:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:27:54 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:27:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:27:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 12 17:27:55 np0005481680 podman[278754]: 2025-10-12 21:27:55.138163671 +0000 UTC m=+0.097311131 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 12 17:27:55 np0005481680 podman[278755]: 2025-10-12 21:27:55.171502228 +0000 UTC m=+0.126727188 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 12 17:27:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:55.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:55 np0005481680 nova_compute[264665]: 2025-10-12 21:27:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:56 np0005481680 nova_compute[264665]: 2025-10-12 21:27:56.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:56 np0005481680 nova_compute[264665]: 2025-10-12 21:27:56.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:56 np0005481680 nova_compute[264665]: 2025-10-12 21:27:56.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:27:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 12 17:27:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:27:57.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:27:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:57.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:27:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 12 17:27:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:27:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:27:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:27:59.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:27:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:00 np0005481680 nova_compute[264665]: 2025-10-12 21:28:00.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
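[Editor's note] Ganesha entered its grace window at 21:27:48 with "duration 90" but lifts it here at 21:28:00, roughly 12 seconds in; the earlier "reclaim complete(0) clid count(0)" check found no clients with state to recover, and NFS-Ganesha may end grace early in that case. The actual window, computed from the two timestamps in the log (note ganesha logs day/month/year):

```python
from datetime import datetime

# Grace window length, from the "IN GRACE, duration 90" line at 21:27:48
# and the "NOT IN GRACE" line above at 21:28:00.
FMT = "%d/%m/%Y %H:%M:%S"
entered = datetime.strptime("12/10/2025 21:27:48", FMT)
lifted = datetime.strptime("12/10/2025 21:28:00", FMT)
print((lifted - entered).total_seconds())  # 12.0 -- well under the 90s maximum
```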
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:28:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:00 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:28:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Oct 12 17:28:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:01 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3494000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:01.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:01 np0005481680 nova_compute[264665]: 2025-10-12 21:28:01.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:02] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:28:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:02] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:28:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:02 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:02 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3470000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:28:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:28:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:28:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:03 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f346c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:03.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212804 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:28:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:04 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3478000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:04 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 12 17:28:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:05 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f34700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:05.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:05 np0005481680 nova_compute[264665]: 2025-10-12 21:28:05.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:06 np0005481680 podman[278853]: 2025-10-12 21:28:06.138209347 +0000 UTC m=+0.098579865 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 12 17:28:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:06 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f346c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:06 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3478001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:06.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:06 np0005481680 nova_compute[264665]: 2025-10-12 21:28:06.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:28:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:07.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:07 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:08 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f34700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:08 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f346c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:28:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:28:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 12 17:28:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:09 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3478001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:10 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:10 np0005481680 nova_compute[264665]: 2025-10-12 21:28:10.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:10 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f34700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:10.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 426 B/s wr, 173 op/s
Oct 12 17:28:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:11 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f346c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:11.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:11 np0005481680 nova_compute[264665]: 2025-10-12 21:28:11.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:12] "GET /metrics HTTP/1.1" 200 48375 "" "Prometheus/2.51.0"
Oct 12 17:28:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:12] "GET /metrics HTTP/1.1" 200 48375 "" "Prometheus/2.51.0"
Oct 12 17:28:12 np0005481680 podman[278878]: 2025-10-12 21:28:12.119041942 +0000 UTC m=+0.077840289 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 12 17:28:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:12 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3478001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:12 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Oct 12 17:28:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:13 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:13.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:14 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3470002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:14 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3478002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 85 B/s wr, 179 op/s
Oct 12 17:28:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:15 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f346c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:15.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:15 np0005481680 nova_compute[264665]: 2025-10-12 21:28:15.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[278706]: 12/10/2025 21:28:16 : epoch 68ec1d54 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3488001c00 fd 38 proxy ignored for local
Oct 12 17:28:16 np0005481680 kernel: ganesha.nfsd[278821]: segfault at 50 ip 00007f354494732e sp 00007f35057f9210 error 4 in libntirpc.so.5.8[7f354492c000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 12 17:28:16 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:28:16 np0005481680 systemd[1]: Started Process Core Dump (PID 278901/UID 0).
Oct 12 17:28:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:16 np0005481680 nova_compute[264665]: 2025-10-12 21:28:16.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 85 B/s wr, 178 op/s
Oct 12 17:28:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:17.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:28:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:17.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:17.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:18 np0005481680 systemd-coredump[278902]: Process 278710 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 53:
                                                       #0  0x00007f354494732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:28:18
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.nfs', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images']
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:28:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:28:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:28:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:18.365 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:28:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:28:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:28:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:18.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:28:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:28:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 85 B/s wr, 178 op/s
Oct 12 17:28:19 np0005481680 systemd[1]: systemd-coredump@15-278901-0.service: Deactivated successfully.
Oct 12 17:28:19 np0005481680 systemd[1]: systemd-coredump@15-278901-0.service: Consumed 1.361s CPU time.
Oct 12 17:28:19 np0005481680 podman[278910]: 2025-10-12 21:28:19.513678231 +0000 UTC m=+0.049728581 container died abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:28:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4ccc3ffc6508a98e6217118de84afd26ac5f463dffbf1e79febb90a2ba345143-merged.mount: Deactivated successfully.
Oct 12 17:28:19 np0005481680 podman[278910]: 2025-10-12 21:28:19.566515721 +0000 UTC m=+0.102566001 container remove abdb402b36f756b90185964fc044c77a23a05c032621c693eae0e3ef79b0762b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:28:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:28:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:19.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:28:19 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.820s CPU time.
Oct 12 17:28:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:20 np0005481680 nova_compute[264665]: 2025-10-12 21:28:20.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Oct 12 17:28:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:21.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:21 np0005481680 nova_compute[264665]: 2025-10-12 21:28:21.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:22] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Oct 12 17:28:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:22] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Oct 12 17:28:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 12 17:28:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212824 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:28:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 12 17:28:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:25.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:25 np0005481680 nova_compute[264665]: 2025-10-12 21:28:25.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:26 np0005481680 podman[278986]: 2025-10-12 21:28:26.141974074 +0000 UTC m=+0.101001571 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 12 17:28:26 np0005481680 podman[278987]: 2025-10-12 21:28:26.195931262 +0000 UTC m=+0.149566991 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 12 17:28:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:26 np0005481680 nova_compute[264665]: 2025-10-12 21:28:26.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 12 17:28:27 np0005481680 nova_compute[264665]: 2025-10-12 21:28:27.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:27 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:27.122 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:28:27 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:27.123 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:28:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:27.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:28.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 12 17:28:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:30 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 16.
Oct 12 17:28:30 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:28:30 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 1.820s CPU time.
Oct 12 17:28:30 np0005481680 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5...
Oct 12 17:28:30 np0005481680 podman[279086]: 2025-10-12 21:28:30.374418057 +0000 UTC m=+0.067675570 container create b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:28:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12ccf33d29f9c2b16c99e5b10cdec9c41fc71b189dacb2c7d16b59d98fa01da/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12ccf33d29f9c2b16c99e5b10cdec9c41fc71b189dacb2c7d16b59d98fa01da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12ccf33d29f9c2b16c99e5b10cdec9c41fc71b189dacb2c7d16b59d98fa01da/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12ccf33d29f9c2b16c99e5b10cdec9c41fc71b189dacb2c7d16b59d98fa01da/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.hypubd-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:30 np0005481680 podman[279086]: 2025-10-12 21:28:30.346950165 +0000 UTC m=+0.040207728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:30 np0005481680 podman[279086]: 2025-10-12 21:28:30.466759945 +0000 UTC m=+0.160017468 container init b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 12 17:28:30 np0005481680 podman[279086]: 2025-10-12 21:28:30.475795467 +0000 UTC m=+0.169052990 container start b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:28:30 np0005481680 bash[279086]: b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f
Oct 12 17:28:30 np0005481680 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 12 17:28:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 12 17:28:30 np0005481680 nova_compute[264665]: 2025-10-12 21:28:30.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:30.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 12 17:28:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:31 np0005481680 nova_compute[264665]: 2025-10-12 21:28:31.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:32] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Oct 12 17:28:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:32] "GET /metrics HTTP/1.1" 200 48377 "" "Prometheus/2.51.0"
Oct 12 17:28:32 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:28:32.126 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:28:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:28:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:28:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:28:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:33.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:34 np0005481680 nova_compute[264665]: 2025-10-12 21:28:34.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:34 np0005481680 nova_compute[264665]: 2025-10-12 21:28:34.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.850053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514850103, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1541, "num_deletes": 255, "total_data_size": 2993499, "memory_usage": 3026912, "flush_reason": "Manual Compaction"}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 12 17:28:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514864021, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2878304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26783, "largest_seqno": 28323, "table_properties": {"data_size": 2871210, "index_size": 4101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14828, "raw_average_key_size": 19, "raw_value_size": 2856950, "raw_average_value_size": 3774, "num_data_blocks": 180, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304370, "oldest_key_time": 1760304370, "file_creation_time": 1760304514, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14068 microseconds, and 7015 cpu microseconds.
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.864131) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2878304 bytes OK
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.864159) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.866283) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.866304) EVENT_LOG_v1 {"time_micros": 1760304514866297, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.866327) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2986873, prev total WAL file size 2986873, number of live WAL files 2.
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.867686) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2810KB)], [59(13MB)]
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514867736, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16971008, "oldest_snapshot_seqno": -1}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6004 keys, 16818828 bytes, temperature: kUnknown
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514990982, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16818828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16775631, "index_size": 27087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 152780, "raw_average_key_size": 25, "raw_value_size": 16664423, "raw_average_value_size": 2775, "num_data_blocks": 1113, "num_entries": 6004, "num_filter_entries": 6004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304514, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.991343) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16818828 bytes
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.992895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.6 rd, 136.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.4 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 6532, records dropped: 528 output_compression: NoCompression
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.992923) EVENT_LOG_v1 {"time_micros": 1760304514992910, "job": 32, "event": "compaction_finished", "compaction_time_micros": 123378, "compaction_time_cpu_micros": 60405, "output_level": 6, "num_output_files": 1, "total_output_size": 16818828, "num_input_records": 6532, "num_output_records": 6004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514993898, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304514998656, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.867617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.998753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.998761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.998764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.998768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:34 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:28:34.998771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:28:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Oct 12 17:28:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:35.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:35 np0005481680 nova_compute[264665]: 2025-10-12 21:28:35.678 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:35 np0005481680 nova_compute[264665]: 2025-10-12 21:28:35.678 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 12 17:28:35 np0005481680 nova_compute[264665]: 2025-10-12 21:28:35.696 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 12 17:28:35 np0005481680 nova_compute[264665]: 2025-10-12 21:28:35.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:35 np0005481680 ovn_controller[154617]: 2025-10-12T21:28:35Z|00068|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 12 17:28:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:36 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 12 17:28:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:36 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 12 17:28:36 np0005481680 nova_compute[264665]: 2025-10-12 21:28:36.682 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:36 np0005481680 nova_compute[264665]: 2025-10-12 21:28:36.683 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:36 np0005481680 nova_compute[264665]: 2025-10-12 21:28:36.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 88 op/s
Oct 12 17:28:37 np0005481680 podman[279149]: 2025-10-12 21:28:37.126457694 +0000 UTC m=+0.082022717 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 12 17:28:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:37.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:37.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:37 np0005481680 nova_compute[264665]: 2025-10-12 21:28:37.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:37 np0005481680 nova_compute[264665]: 2025-10-12 21:28:37.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:37 np0005481680 nova_compute[264665]: 2025-10-12 21:28:37.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:37 np0005481680 nova_compute[264665]: 2025-10-12 21:28:37.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
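The recurring "Running periodic task" lines come from oslo.service iterating over the ComputeManager's decorated methods. A minimal sketch of that pattern, with an assumed spacing value (nova wires the real intervals to config options such as reclaim_instance_interval):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # spacing is an assumption here
        def _reclaim_queued_deletes(self, context):
            # nova returns early when CONF.reclaim_instance_interval <= 0,
            # producing the "skipping..." line logged above
            pass

    # Manager().run_periodic_tasks(context) is what emits the DEBUG lines seen above.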
Oct 12 17:28:38 np0005481680 nova_compute[264665]: 2025-10-12 21:28:38.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:38.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 88 op/s
Oct 12 17:28:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
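The anonymous HEAD / requests arriving every second or two from 192.168.122.100 and .102 read like load-balancer health probes, and each one leaves the same three-line pattern: a start marker, a completion marker, and a beast access-log line. A small parsing sketch for the access-log line (the regex is fitted to the format seen here, not a documented grammar):

    import re

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:28:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = re.search(r'(\d+\.\d+\.\d+\.\d+) - (\S+) \[([^\]]+)\] '
                  r'"([^"]+)" (\d+) (\d+).*latency=([\d.]+)s', line)
    ip, user, ts, request, status, size, latency = m.groups()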
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.731 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.731 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.731 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
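The acquire/release pairs above are oslo.concurrency named locks; the "inner" frames belong to its synchronized wrapper. A minimal sketch of the same primitive, reusing the lock name from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # critical section; nova guards resource-tracker updates this way
        pass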
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.732 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:28:39 np0005481680 nova_compute[264665]: 2025-10-12 21:28:39.732 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:28:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:28:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619222202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.218 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
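That round trip (nova spawns the ceph CLI, the mon audits the dispatch, the command returns in about half a second) is straightforward to reproduce. A sketch, assuming the same client keyring and conf path are readable and that total_avail_bytes is the cluster-wide free-space field in the JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    print(stats["stats"]["total_avail_bytes"])  # cluster-wide free bytes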
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.471 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.473 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4585MB free_disk=59.94365310668945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.474 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.475 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.619 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.620 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.688 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:28:40 np0005481680 nova_compute[264665]: 2025-10-12 21:28:40.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 12 17:28:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:28:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1814728421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.194 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.200 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.234 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.268 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.269 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.269 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
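The inventory dict logged above is what bounds scheduling on this node; placement treats usable capacity per resource class as (total - reserved) * allocation_ratio. A quick check against the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2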
Oct 12 17:28:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:41.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:41 np0005481680 nova_compute[264665]: 2025-10-12 21:28:41.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:42] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:28:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:42] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.273 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.299 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.299 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.300 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.318 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:28:42 np0005481680 nova_compute[264665]: 2025-10-12 21:28:42.318 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
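The ganesha daemon reaches NFS SERVER INITIALIZED despite the DBUS and Kerberos noise above; every DBUS CRIT traces back to the missing /run/dbus/system_bus_socket named in the gsh_dbus_pkginit error. A one-line precondition check (path copied from that error):

    import os
    print(os.path.exists("/run/dbus/system_bus_socket"))  # False in this container, per the CRIT line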
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 12 17:28:43 np0005481680 podman[279236]: 2025-10-12 21:28:43.130564903 +0000 UTC m=+0.086480090 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 12 17:28:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:43 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212844 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 12 17:28:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:44 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:44 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:44.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 300 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 12 17:28:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:45 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:45 np0005481680 nova_compute[264665]: 2025-10-12 21:28:45.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:28:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:28:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:46 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:46 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:47 np0005481680 nova_compute[264665]: 2025-10-12 21:28:46.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.011662382 +0000 UTC m=+0.066317814 container create 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 17:28:47 np0005481680 systemd[1]: Started libpod-conmon-45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def.scope.
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:46.984575781 +0000 UTC m=+0.039231293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 108 KiB/s wr, 36 op/s
Oct 12 17:28:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.122802981 +0000 UTC m=+0.177458493 container init 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.131018791 +0000 UTC m=+0.185674253 container start 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.135595617 +0000 UTC m=+0.190251139 container attach 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 12 17:28:47 np0005481680 intelligent_jennings[279474]: 167 167
Oct 12 17:28:47 np0005481680 systemd[1]: libpod-45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def.scope: Deactivated successfully.
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.140642867 +0000 UTC m=+0.195298319 container died 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:28:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b283f15f3d5c626c7cab2838a5afdc7d0b38a59f44d9220d208f2c83b8ad39a9-merged.mount: Deactivated successfully.
Oct 12 17:28:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:28:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:47 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:28:47 np0005481680 podman[279458]: 2025-10-12 21:28:47.196017872 +0000 UTC m=+0.250673324 container remove 45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:28:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:47.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:47 np0005481680 systemd[1]: libpod-conmon-45b7a955f802006235669c2e20bc6ec3936e6b21dc0c88e7d18a3dc2e0994def.scope: Deactivated successfully.
Oct 12 17:28:47 np0005481680 podman[279500]: 2025-10-12 21:28:47.443473322 +0000 UTC m=+0.065696589 container create 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 12 17:28:47 np0005481680 systemd[1]: Started libpod-conmon-64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd.scope.
Oct 12 17:28:47 np0005481680 podman[279500]: 2025-10-12 21:28:47.415449066 +0000 UTC m=+0.037672393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:47 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:47 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48640025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:47 np0005481680 podman[279500]: 2025-10-12 21:28:47.571165444 +0000 UTC m=+0.193388691 container init 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct 12 17:28:47 np0005481680 podman[279500]: 2025-10-12 21:28:47.585560902 +0000 UTC m=+0.207784179 container start 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:28:47 np0005481680 podman[279500]: 2025-10-12 21:28:47.590390625 +0000 UTC m=+0.212613872 container attach 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:28:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:47 np0005481680 strange_mccarthy[279517]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:28:47 np0005481680 strange_mccarthy[279517]: --> All data devices are unavailable
Oct 12 17:28:47 np0005481680 systemd[1]: libpod-64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd.scope: Deactivated successfully.
Oct 12 17:28:48 np0005481680 podman[279533]: 2025-10-12 21:28:48.032526539 +0000 UTC m=+0.033967978 container died 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:28:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-058b35e6d6a0d239f745544f2338af6d074476c6ac334325897fcccbc2b4d219-merged.mount: Deactivated successfully.
Oct 12 17:28:48 np0005481680 podman[279533]: 2025-10-12 21:28:48.107410712 +0000 UTC m=+0.108852131 container remove 64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:28:48 np0005481680 systemd[1]: libpod-conmon-64498145c0b634d9a8ca4c46f6aeb6b2a5b5e334a7e4eb8a5a1106e8d9288bdd.scope: Deactivated successfully.
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:28:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092230123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:28:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092230123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:28:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:48 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212848 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:28:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:48 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:48 np0005481680 podman[279641]: 2025-10-12 21:28:48.90242318 +0000 UTC m=+0.087634739 container create d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:28:48 np0005481680 podman[279641]: 2025-10-12 21:28:48.856102866 +0000 UTC m=+0.041314505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:48 np0005481680 systemd[1]: Started libpod-conmon-d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b.scope.
Oct 12 17:28:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 108 KiB/s wr, 36 op/s
Oct 12 17:28:49 np0005481680 podman[279641]: 2025-10-12 21:28:49.132657562 +0000 UTC m=+0.317869161 container init d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:28:49 np0005481680 podman[279641]: 2025-10-12 21:28:49.144807791 +0000 UTC m=+0.330019340 container start d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:28:49 np0005481680 quirky_hodgkin[279658]: 167 167
Oct 12 17:28:49 np0005481680 systemd[1]: libpod-d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b.scope: Deactivated successfully.
Oct 12 17:28:49 np0005481680 podman[279641]: 2025-10-12 21:28:49.159603819 +0000 UTC m=+0.344815418 container attach d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:28:49 np0005481680 podman[279641]: 2025-10-12 21:28:49.160620226 +0000 UTC m=+0.345831785 container died d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:28:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6dc3ccb1c239b4628e479e95a3c6784d5d6319274837f3803af848748b832671-merged.mount: Deactivated successfully.
Oct 12 17:28:49 np0005481680 podman[279641]: 2025-10-12 21:28:49.321199857 +0000 UTC m=+0.506411416 container remove d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:28:49 np0005481680 systemd[1]: libpod-conmon-d5b0d36c15bf2a278ad7047b01ee0541a2a3feadbade48760408e9a03b67669b.scope: Deactivated successfully.
Oct 12 17:28:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:49 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:49 np0005481680 podman[279685]: 2025-10-12 21:28:49.579130436 +0000 UTC m=+0.073566251 container create 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:28:49 np0005481680 podman[279685]: 2025-10-12 21:28:49.543988587 +0000 UTC m=+0.038424453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:49 np0005481680 systemd[1]: Started libpod-conmon-735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4.scope.
Oct 12 17:28:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4aab0f309603ba5c8d21a3e5a215c36b1a4bfd565a23c729dc66ed1cbc3568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4aab0f309603ba5c8d21a3e5a215c36b1a4bfd565a23c729dc66ed1cbc3568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4aab0f309603ba5c8d21a3e5a215c36b1a4bfd565a23c729dc66ed1cbc3568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4aab0f309603ba5c8d21a3e5a215c36b1a4bfd565a23c729dc66ed1cbc3568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:49 np0005481680 podman[279685]: 2025-10-12 21:28:49.764921321 +0000 UTC m=+0.259357196 container init 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:28:49 np0005481680 podman[279685]: 2025-10-12 21:28:49.777198775 +0000 UTC m=+0.271634590 container start 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:28:49 np0005481680 podman[279685]: 2025-10-12 21:28:49.811724197 +0000 UTC m=+0.306160032 container attach 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:28:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]: {
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:    "0": [
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:        {
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "devices": [
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "/dev/loop3"
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            ],
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "lv_name": "ceph_lv0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "lv_size": "21470642176",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "name": "ceph_lv0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "tags": {
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.cluster_name": "ceph",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.crush_device_class": "",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.encrypted": "0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.osd_id": "0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.type": "block",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.vdo": "0",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:                "ceph.with_tpm": "0"
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            },
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "type": "block",
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:            "vg_name": "ceph_vg0"
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:        }
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]:    ]
Oct 12 17:28:50 np0005481680 hardcore_fermi[279702]: }
Oct 12 17:28:50 np0005481680 podman[279685]: 2025-10-12 21:28:50.134733288 +0000 UTC m=+0.629169113 container died 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:28:50 np0005481680 systemd[1]: libpod-735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4.scope: Deactivated successfully.
Oct 12 17:28:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6e4aab0f309603ba5c8d21a3e5a215c36b1a4bfd565a23c729dc66ed1cbc3568-merged.mount: Deactivated successfully.
Oct 12 17:28:50 np0005481680 podman[279685]: 2025-10-12 21:28:50.371864155 +0000 UTC m=+0.866299980 container remove 735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:28:50 np0005481680 systemd[1]: libpod-conmon-735a0d8fc55cf1d9579759d142c4d6bf1c2932781f5b587e4c5fe429cc17aad4.scope: Deactivated successfully.
Oct 12 17:28:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:50 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48640025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:50 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:50 np0005481680 nova_compute[264665]: 2025-10-12 21:28:50.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:28:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 110 KiB/s wr, 36 op/s
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.247897942 +0000 UTC m=+0.096047244 container create b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.197149136 +0000 UTC m=+0.045298448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:51 np0005481680 systemd[1]: Started libpod-conmon-b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d.scope.
Oct 12 17:28:51 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.549416115 +0000 UTC m=+0.397565477 container init b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.560842057 +0000 UTC m=+0.408991359 container start b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:28:51 np0005481680 zealous_agnesi[279834]: 167 167
Oct 12 17:28:51 np0005481680 systemd[1]: libpod-b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d.scope: Deactivated successfully.
Oct 12 17:28:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:51 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48440016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.693590008 +0000 UTC m=+0.541739380 container attach b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:28:51 np0005481680 podman[279817]: 2025-10-12 21:28:51.694804249 +0000 UTC m=+0.542953561 container died b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 12 17:28:51 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0a2fbd17b3a0f3b4fc324c3a48c8f98280f06a13011af11280d0bc5b710e0d28-merged.mount: Deactivated successfully.
Oct 12 17:28:52 np0005481680 nova_compute[264665]: 2025-10-12 21:28:52.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:28:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:52] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:28:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:28:52] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:28:52 np0005481680 podman[279817]: 2025-10-12 21:28:52.295022011 +0000 UTC m=+1.143171313 container remove b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_agnesi, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:28:52 np0005481680 systemd[1]: libpod-conmon-b15aead683736c4c143b209dc24f32654ddfdc17e49ddfdc46c04e635caded1d.scope: Deactivated successfully.
Oct 12 17:28:52 np0005481680 podman[279862]: 2025-10-12 21:28:52.619847278 +0000 UTC m=+0.118335003 container create f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:28:52 np0005481680 podman[279862]: 2025-10-12 21:28:52.544782491 +0000 UTC m=+0.043270236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:28:52 np0005481680 systemd[1]: Started libpod-conmon-f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c.scope.
Oct 12 17:28:52 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:28:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280879a8bebd62580db6fa4934f866df8745443265a677c7386e7f9b8c2071b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280879a8bebd62580db6fa4934f866df8745443265a677c7386e7f9b8c2071b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280879a8bebd62580db6fa4934f866df8745443265a677c7386e7f9b8c2071b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:52 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280879a8bebd62580db6fa4934f866df8745443265a677c7386e7f9b8c2071b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:28:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:52 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48400016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:52 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48640032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:28:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:28:52 np0005481680 podman[279862]: 2025-10-12 21:28:52.883702868 +0000 UTC m=+0.382190583 container init f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:28:52 np0005481680 podman[279862]: 2025-10-12 21:28:52.891942239 +0000 UTC m=+0.390429944 container start f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:28:52 np0005481680 podman[279862]: 2025-10-12 21:28:52.908626855 +0000 UTC m=+0.407114560 container attach f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:28:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 0 op/s
Oct 12 17:28:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:53 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:53.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:53 np0005481680 lvm[279956]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:28:53 np0005481680 lvm[279956]: VG ceph_vg0 finished
Oct 12 17:28:53 np0005481680 quizzical_davinci[279879]: {}
Oct 12 17:28:53 np0005481680 systemd[1]: libpod-f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c.scope: Deactivated successfully.
Oct 12 17:28:53 np0005481680 systemd[1]: libpod-f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c.scope: Consumed 1.650s CPU time.
Oct 12 17:28:53 np0005481680 podman[279862]: 2025-10-12 21:28:53.912309693 +0000 UTC m=+1.410797438 container died f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:28:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7280879a8bebd62580db6fa4934f866df8745443265a677c7386e7f9b8c2071b-merged.mount: Deactivated successfully.
Oct 12 17:28:54 np0005481680 podman[279862]: 2025-10-12 21:28:54.143124289 +0000 UTC m=+1.641612024 container remove f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:28:54 np0005481680 systemd[1]: libpod-conmon-f35c33742966f4b830b22626f9e81d85cb1408438bff4389fe7de0ef8615509c.scope: Deactivated successfully.
Oct 12 17:28:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:28:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:28:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:54 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:28:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:54 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:54.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 1 op/s
Oct 12 17:28:55 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:55 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:28:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:55 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48640032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:55 np0005481680 nova_compute[264665]: 2025-10-12 21:28:55.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:28:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:56 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:56 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:57 np0005481680 nova_compute[264665]: 2025-10-12 21:28:57.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:28:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:28:57 np0005481680 podman[280001]: 2025-10-12 21:28:57.156539394 +0000 UTC m=+0.106225584 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 12 17:28:57 np0005481680 podman[280002]: 2025-10-12 21:28:57.20414517 +0000 UTC m=+0.153859740 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 12 17:28:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:28:57.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:28:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:57 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:58 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:58 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:28:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:28:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:28:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:28:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:28:59 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:28:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:28:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:28:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:28:59.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:28:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:00 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:00 np0005481680 nova_compute[264665]: 2025-10-12 21:29:00.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:29:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:00 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.003000075s ======
Oct 12 17:29:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:00.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Oct 12 17:29:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:29:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:01 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:01.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:02] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:29:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:02] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:29:02 np0005481680 nova_compute[264665]: 2025-10-12 21:29:02.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:02 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:02 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 0 op/s
Oct 12 17:29:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:29:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
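[annotation] The audit line shows the active mgr (mgr.compute-0.fmjeht) polling the monitor with the JSON command {"prefix": "osd blocklist ls", "format": "json"}; the same dispatch recurs about every 15 seconds in this log. A minimal sketch of issuing the same mon command through the librados Python binding (the conffile path is an assumption for illustration):

```python
import json

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same JSON payload the audit log shows the mgr dispatching.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
    blocklist = json.loads(outbuf) if outbuf else []
    print(ret, blocklist)
finally:
    cluster.shutdown()
```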
Oct 12 17:29:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:03 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:04 np0005481680 nova_compute[264665]: 2025-10-12 21:29:04.694 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
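[annotation] This nova_compute line is oslo.service's periodic-task loop running ComputeManager._sync_power_states. A minimal sketch of how such a task is declared and driven with oslo.service (class and task names here are illustrative, not nova's actual implementation):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task  # default spacing: considered on every pass
    def _sync_power_states(self, context):
        print("reconciling instance power states")

mgr = Manager(cfg.CONF)
mgr.run_periodic_tasks(context=None)  # the call the log line above records
```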
Oct 12 17:29:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:04 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:04 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:04.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:29:05 np0005481680 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 12 17:29:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:05 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:05.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:05 np0005481680 nova_compute[264665]: 2025-10-12 21:29:05.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:06 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:06 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:06.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:07 np0005481680 nova_compute[264665]: 2025-10-12 21:29:07.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:29:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:07.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
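[annotation] The alertmanager dispatcher error above shows the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) timing out after two retry attempts; the same error recurs every ten seconds below. For reference, a minimal stand-in for such a receiver is sketched here (hypothetical; the real endpoint is the Ceph dashboard, which serves HTTPS):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PrometheusReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Alertmanager's webhook payload carries the alert batch here.
        self.log_message("received %d alert(s)", len(payload.get("alerts", [])))
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8443), PrometheusReceiver).serve_forever()
```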
Oct 12 17:29:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:07 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:07.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:08 np0005481680 podman[280085]: 2025-10-12 21:29:08.126857479 +0000 UTC m=+0.087829765 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
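[annotation] The podman event above is a periodic container healthcheck: the configured test is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/multipathd, and the result (health_status=healthy, health_failing_streak=0) is embedded in the event. A minimal sketch of triggering the same check by hand from Python (container name taken from the log):

```python
import subprocess

def healthcheck(container="multipathd"):
    # "podman healthcheck run" exits 0 when the configured test passes.
    run = subprocess.run(["podman", "healthcheck", "run", container],
                         capture_output=True, text=True)
    return "healthy" if run.returncode == 0 else "unhealthy"

print(healthcheck())
```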
Oct 12 17:29:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:08 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:08 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:29:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:09 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:10 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:10 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:10 np0005481680 nova_compute[264665]: 2025-10-12 21:29:10.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 12 17:29:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:11 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:12] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:29:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:12] "GET /metrics HTTP/1.1" 200 48395 "" "Prometheus/2.51.0"
Oct 12 17:29:12 np0005481680 nova_compute[264665]: 2025-10-12 21:29:12.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:12 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:12 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:12.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 12 17:29:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:13 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:13.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:14 np0005481680 podman[280111]: 2025-10-12 21:29:14.115918767 +0000 UTC m=+0.075893660 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:29:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/212914 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
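[annotation] The haproxy warning marks backend/nfs.cephfs.0 DOWN on a Layer4 check (connection refused), and the ganesha svc_vc_recv events that recur every second or two throughout this log are consistent with probe connections that never deliver a complete PROXY protocol preamble before the RPC stream; that reading is an interpretation, not something the log states. For reference, the PROXY v1 header a fronting proxy sends ahead of the payload looks like this (addresses and ports below are illustrative):

```python
def proxy_v1_header(src="192.168.122.100", dst="192.168.122.102",
                    sport=54321, dport=2049):
    # PROXY v1 preamble: protocol, client address, server address, ports,
    # terminated by CRLF and sent before any application bytes.
    return f"PROXY TCP4 {src} {dst} {sport} {dport}\r\n".encode()
```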
Oct 12 17:29:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:14 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:14 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:14.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 12 17:29:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:15 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:29:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:29:15 np0005481680 nova_compute[264665]: 2025-10-12 21:29:15.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:16 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:16 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:16.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 12 17:29:17 np0005481680 nova_compute[264665]: 2025-10-12 21:29:17.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:17.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:29:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:17 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:29:18
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', '.nfs', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images']
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:29:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:29:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:29:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:29:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:29:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:18.366 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:18 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011091491258058215 of space, bias 1.0, pg target 0.33274473774174645 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
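[annotation] Each pg_autoscaler line above records the same arithmetic: pg target = capacity ratio * bias * 300, where the multiplier 300 is inferred from the logged values themselves (target / (ratio * bias) is exactly 300 for every pool; plausibly mon_target_pg_per_osd = 100 across three OSDs, but that is an assumption about this cluster). The target is then quantized to a power of two with a per-pool floor, which is why tiny pools still read 32 (16 for the CephFS metadata pool, 1 for .mgr). A worked sketch that reproduces the logged numbers:

```python
def nearest_power_of_two(n):
    n = max(1, int(round(n)))
    lower = 1 << (n.bit_length() - 1)
    upper = lower << 1
    return upper if (upper - n) < (n - lower) else lower

def quantized_pg_target(capacity_ratio, bias, pg_num_min=32, pg_budget=300):
    # pg_num_min per pool is inferred from the output above: 1 for .mgr,
    # 16 for cephfs.cephfs.meta, 32 elsewhere.
    return max(pg_num_min, nearest_power_of_two(capacity_ratio * bias * pg_budget))

assert quantized_pg_target(7.185749983720779e-06, 1.0, pg_num_min=1) == 1   # .mgr
assert quantized_pg_target(0.0011091491258058215, 1.0) == 32                # vms
assert quantized_pg_target(5.087256625643029e-07, 4.0, pg_num_min=16) == 16 # cephfs.cephfs.meta
```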
Oct 12 17:29:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:18 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:29:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:18.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:29:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:29:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 12 17:29:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:19 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:19.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:20 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 17:29:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:20 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:20 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:20.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:20 np0005481680 nova_compute[264665]: 2025-10-12 21:29:20.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 12 17:29:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:21 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:21.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:22] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Oct 12 17:29:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:22] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Oct 12 17:29:22 np0005481680 nova_compute[264665]: 2025-10-12 21:29:22.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:22 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:22 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:22.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 12 17:29:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:23 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f48380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:24 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:24 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:24.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 188 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 98 op/s
Oct 12 17:29:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:25 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:25.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:25 np0005481680 nova_compute[264665]: 2025-10-12 21:29:25.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:26 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:26 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 188 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Oct 12 17:29:27 np0005481680 nova_compute[264665]: 2025-10-12 21:29:27.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:27.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:29:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:27 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:27.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:28 np0005481680 podman[280171]: 2025-10-12 21:29:28.123381677 +0000 UTC m=+0.075504730 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:29:28 np0005481680 podman[280172]: 2025-10-12 21:29:28.174199225 +0000 UTC m=+0.124700346 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:29:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:28 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:28 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 188 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Oct 12 17:29:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:29 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:30 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:31 np0005481680 nova_compute[264665]: 2025-10-12 21:29:31.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 12 17:29:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:31 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:31.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:32] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Oct 12 17:29:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:32] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Oct 12 17:29:32 np0005481680 nova_compute[264665]: 2025-10-12 21:29:32.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:32 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:32 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 12 17:29:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:29:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:29:33 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:33 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:34 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:34 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:34 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 12 17:29:35 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:35 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:29:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:29:36 np0005481680 nova_compute[264665]: 2025-10-12 21:29:36.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:36.118 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:29:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:36.119 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:29:36 np0005481680 nova_compute[264665]: 2025-10-12 21:29:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:36 np0005481680 nova_compute[264665]: 2025-10-12 21:29:36.681 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:36 np0005481680 nova_compute[264665]: 2025-10-12 21:29:36.682 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:36 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:36 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:29:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:29:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 383 KiB/s rd, 172 KiB/s wr, 44 op/s
Oct 12 17:29:37 np0005481680 nova_compute[264665]: 2025-10-12 21:29:37.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:37.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:29:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:37 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:37 np0005481680 nova_compute[264665]: 2025-10-12 21:29:37.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:37 np0005481680 nova_compute[264665]: 2025-10-12 21:29:37.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:37 np0005481680 nova_compute[264665]: 2025-10-12 21:29:37.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:29:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:37.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:38 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:29:38.123 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:29:38 np0005481680 nova_compute[264665]: 2025-10-12 21:29:38.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:38 np0005481680 nova_compute[264665]: 2025-10-12 21:29:38.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:38 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:38 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 383 KiB/s rd, 172 KiB/s wr, 44 op/s
Oct 12 17:29:39 np0005481680 podman[280228]: 2025-10-12 21:29:39.157435843 +0000 UTC m=+0.113323706 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 12 17:29:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=cleanup t=2025-10-12T21:29:39.266198782Z level=info msg="Completed cleanup jobs" duration=136.876587ms
Oct 12 17:29:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugins.update.checker t=2025-10-12T21:29:39.280423945Z level=info msg="Update check succeeded" duration=50.966133ms
Oct 12 17:29:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana.update.checker t=2025-10-12T21:29:39.299160293Z level=info msg="Update check succeeded" duration=46.631661ms
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.421287) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579421366, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1097, "num_deletes": 502, "total_data_size": 1268395, "memory_usage": 1296640, "flush_reason": "Manual Compaction"}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579437246, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 945043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28325, "largest_seqno": 29420, "table_properties": {"data_size": 940694, "index_size": 1425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13983, "raw_average_key_size": 19, "raw_value_size": 929680, "raw_average_value_size": 1294, "num_data_blocks": 62, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304514, "oldest_key_time": 1760304514, "file_creation_time": 1760304579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 15998 microseconds, and 5806 cpu microseconds.
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.437294) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 945043 bytes OK
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.437320) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.448474) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.448498) EVENT_LOG_v1 {"time_micros": 1760304579448490, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.448518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1262314, prev total WAL file size 1262314, number of live WAL files 2.
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.449382) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(922KB)], [62(16MB)]
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579449438, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17763871, "oldest_snapshot_seqno": -1}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5722 keys, 11947520 bytes, temperature: kUnknown
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579558169, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11947520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11911582, "index_size": 20532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 148041, "raw_average_key_size": 25, "raw_value_size": 11810557, "raw_average_value_size": 2064, "num_data_blocks": 824, "num_entries": 5722, "num_filter_entries": 5722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.558768) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11947520 bytes
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.563656) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.8 rd, 109.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(31.4) write-amplify(12.6) OK, records in: 6722, records dropped: 1000 output_compression: NoCompression
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.563687) EVENT_LOG_v1 {"time_micros": 1760304579563673, "job": 34, "event": "compaction_finished", "compaction_time_micros": 109105, "compaction_time_cpu_micros": 46672, "output_level": 6, "num_output_files": 1, "total_output_size": 11947520, "num_input_records": 6722, "num_output_records": 5722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579564788, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304579570656, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.449259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.570904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.570913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.570916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.570919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:29:39.570922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:29:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:39 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.691 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.691 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.691 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.692 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:29:39 np0005481680 nova_compute[264665]: 2025-10-12 21:29:39.692 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:29:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:29:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394009622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.152 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.337 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.338 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4585MB free_disk=59.897064208984375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.338 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.339 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.443 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.443 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.543 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing inventories for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.562 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating ProviderTree inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.562 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.595 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing aggregate associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.637 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing trait associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 12 17:29:40 np0005481680 nova_compute[264665]: 2025-10-12 21:29:40.674 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:29:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:40 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:40 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:40 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:40.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:29:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949035153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.111 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.117 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:29:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 383 KiB/s rd, 183 KiB/s wr, 45 op/s
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.138 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.140 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:29:41 np0005481680 nova_compute[264665]: 2025-10-12 21:29:41.141 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:29:41 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:41 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:42] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 12 17:29:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:42] "GET /metrics HTTP/1.1" 200 48402 "" "Prometheus/2.51.0"
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.142 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.142 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.143 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.161 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:42 np0005481680 nova_compute[264665]: 2025-10-12 21:29:42.162 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:29:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:42 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 24 KiB/s wr, 2 op/s
Oct 12 17:29:43 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:43 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:44 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:44 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:44 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:45 np0005481680 podman[280324]: 2025-10-12 21:29:45.131389774 +0000 UTC m=+0.084922821 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 12 17:29:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 28 KiB/s wr, 2 op/s
Oct 12 17:29:45 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:45 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:46 np0005481680 nova_compute[264665]: 2025-10-12 21:29:46.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:46 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:46 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:46 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 15 KiB/s wr, 2 op/s
Oct 12 17:29:47 np0005481680 nova_compute[264665]: 2025-10-12 21:29:47.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:47.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:29:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:47 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:29:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:29:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:29:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:48 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:48 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:48.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 15 KiB/s wr, 2 op/s
Oct 12 17:29:49 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:49 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:50 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:50 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:50 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:51 np0005481680 nova_compute[264665]: 2025-10-12 21:29:51.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 16 KiB/s wr, 2 op/s
Oct 12 17:29:51 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:51 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:29:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:51.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:29:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:52] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:29:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:29:52] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:29:52 np0005481680 nova_compute[264665]: 2025-10-12 21:29:52.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:52 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:52 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.3 KiB/s wr, 0 op/s
Oct 12 17:29:53 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:53 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:53.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:54 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:29:54 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:54 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:54.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.3 KiB/s wr, 0 op/s
Oct 12 17:29:55 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:55 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:56 np0005481680 nova_compute[264665]: 2025-10-12 21:29:56.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:29:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:29:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:56 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:56 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:56 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:56.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1023 B/s wr, 0 op/s
Oct 12 17:29:57 np0005481680 nova_compute[264665]: 2025-10-12 21:29:57.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:29:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:29:57.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:29:57 np0005481680 podman[280602]: 2025-10-12 21:29:57.296260283 +0000 UTC m=+0.078882456 container create e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:29:57 np0005481680 podman[280602]: 2025-10-12 21:29:57.264154613 +0000 UTC m=+0.046776856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:29:57 np0005481680 systemd[1]: Started libpod-conmon-e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8.scope.
Oct 12 17:29:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:29:57 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:29:57 np0005481680 podman[280602]: 2025-10-12 21:29:57.424216121 +0000 UTC m=+0.206838334 container init e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:29:57 np0005481680 podman[280602]: 2025-10-12 21:29:57.437143122 +0000 UTC m=+0.219765305 container start e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:29:57 np0005481680 podman[280602]: 2025-10-12 21:29:57.442333454 +0000 UTC m=+0.224955617 container attach e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:29:57 np0005481680 beautiful_golick[280618]: 167 167
Oct 12 17:29:57 np0005481680 systemd[1]: libpod-e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8.scope: Deactivated successfully.
Oct 12 17:29:57 np0005481680 podman[280623]: 2025-10-12 21:29:57.508658739 +0000 UTC m=+0.043337388 container died e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:29:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bcd54103e9004926c0271721dd9c573ca33ce63f4f59c7358fa463c45e24ffaf-merged.mount: Deactivated successfully.
Oct 12 17:29:57 np0005481680 podman[280623]: 2025-10-12 21:29:57.564474365 +0000 UTC m=+0.099152954 container remove e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_golick, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:29:57 np0005481680 systemd[1]: libpod-conmon-e57a79c9f6ca853f116bc72c00ca761cba8b29f4fb9190fc635d0b92529740d8.scope: Deactivated successfully.
Oct 12 17:29:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:57 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:57.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:57 np0005481680 podman[280646]: 2025-10-12 21:29:57.823777518 +0000 UTC m=+0.071944699 container create f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 12 17:29:57 np0005481680 podman[280646]: 2025-10-12 21:29:57.791826502 +0000 UTC m=+0.039993733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:29:57 np0005481680 systemd[1]: Started libpod-conmon-f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262.scope.
Oct 12 17:29:57 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:29:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:57 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:57 np0005481680 podman[280646]: 2025-10-12 21:29:57.940849578 +0000 UTC m=+0.189016819 container init f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:29:57 np0005481680 podman[280646]: 2025-10-12 21:29:57.953892602 +0000 UTC m=+0.202059793 container start f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:29:57 np0005481680 podman[280646]: 2025-10-12 21:29:57.960297945 +0000 UTC m=+0.208465136 container attach f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:29:58 np0005481680 great_hawking[280663]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:29:58 np0005481680 great_hawking[280663]: --> All data devices are unavailable
Oct 12 17:29:58 np0005481680 systemd[1]: libpod-f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262.scope: Deactivated successfully.
Oct 12 17:29:58 np0005481680 podman[280646]: 2025-10-12 21:29:58.378465377 +0000 UTC m=+0.626632528 container died f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 12 17:29:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-bccb44ab88e8f5af55b612a39e652933ffc8bc59048ccf27c533ca7fdb82b1f5-merged.mount: Deactivated successfully.
Oct 12 17:29:58 np0005481680 podman[280646]: 2025-10-12 21:29:58.430168598 +0000 UTC m=+0.678335759 container remove f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:29:58 np0005481680 systemd[1]: libpod-conmon-f86fcddf98813000bbece39e41a8c297b2487bb0bb945e493bd4ba88c641b262.scope: Deactivated successfully.
Oct 12 17:29:58 np0005481680 podman[280678]: 2025-10-12 21:29:58.508465068 +0000 UTC m=+0.093628833 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:29:58 np0005481680 podman[280681]: 2025-10-12 21:29:58.598578769 +0000 UTC m=+0.182988675 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 12 17:29:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:58 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:58 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:29:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:29:58.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:29:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1023 B/s wr, 0 op/s
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.206042506 +0000 UTC m=+0.067146726 container create 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:29:59 np0005481680 systemd[1]: Started libpod-conmon-602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3.scope.
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.180004501 +0000 UTC m=+0.041108771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:29:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.304794349 +0000 UTC m=+0.165898629 container init 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.316693263 +0000 UTC m=+0.177797493 container start 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.320636114 +0000 UTC m=+0.181740344 container attach 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:29:59 np0005481680 charming_curie[280843]: 167 167
Oct 12 17:29:59 np0005481680 systemd[1]: libpod-602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3.scope: Deactivated successfully.
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.326569395 +0000 UTC m=+0.187673615 container died 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 17:29:59 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3180e190e016d9c94e115456e9165053ff14f513cb1afff17c3f5c3e3e2021dc-merged.mount: Deactivated successfully.
Oct 12 17:29:59 np0005481680 podman[280825]: 2025-10-12 21:29:59.37764206 +0000 UTC m=+0.238746290 container remove 602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:29:59 np0005481680 systemd[1]: libpod-conmon-602e1acd3b05bd2f9d44b3046de5c011c981302c43404d2ee14593146d6050b3.scope: Deactivated successfully.
Oct 12 17:29:59 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:29:59 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:29:59 np0005481680 podman[280867]: 2025-10-12 21:29:59.64150298 +0000 UTC m=+0.071829026 container create fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 12 17:29:59 np0005481680 systemd[1]: Started libpod-conmon-fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c.scope.
Oct 12 17:29:59 np0005481680 podman[280867]: 2025-10-12 21:29:59.617264691 +0000 UTC m=+0.047590777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:29:59 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:29:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700b3e836fc7e8954a746b649c813fd1e7825cdb4148344da615c31e6fedcdd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700b3e836fc7e8954a746b649c813fd1e7825cdb4148344da615c31e6fedcdd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700b3e836fc7e8954a746b649c813fd1e7825cdb4148344da615c31e6fedcdd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:59 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700b3e836fc7e8954a746b649c813fd1e7825cdb4148344da615c31e6fedcdd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:29:59 np0005481680 podman[280867]: 2025-10-12 21:29:59.752126096 +0000 UTC m=+0.182452182 container init fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:29:59 np0005481680 podman[280867]: 2025-10-12 21:29:59.765515388 +0000 UTC m=+0.195841434 container start fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:29:59 np0005481680 podman[280867]: 2025-10-12 21:29:59.769612863 +0000 UTC m=+0.199938949 container attach fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:29:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:29:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:29:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:29:59.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:29:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]: {
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:    "0": [
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:        {
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "devices": [
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "/dev/loop3"
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            ],
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "lv_name": "ceph_lv0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "lv_size": "21470642176",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "name": "ceph_lv0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "tags": {
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.cluster_name": "ceph",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.crush_device_class": "",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.encrypted": "0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.osd_id": "0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.type": "block",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.vdo": "0",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:                "ceph.with_tpm": "0"
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            },
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "type": "block",
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:            "vg_name": "ceph_vg0"
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:        }
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]:    ]
Oct 12 17:30:00 np0005481680 elegant_dirac[280884]: }
Oct 12 17:30:00 np0005481680 systemd[1]: libpod-fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c.scope: Deactivated successfully.
Oct 12 17:30:00 np0005481680 podman[280867]: 2025-10-12 21:30:00.106748774 +0000 UTC m=+0.537074820 container died fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:30:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-700b3e836fc7e8954a746b649c813fd1e7825cdb4148344da615c31e6fedcdd0-merged.mount: Deactivated successfully.
Oct 12 17:30:00 np0005481680 podman[280867]: 2025-10-12 21:30:00.17392486 +0000 UTC m=+0.604250876 container remove fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 12 17:30:00 np0005481680 systemd[1]: libpod-conmon-fcb90ae88cd47a031cfa9b8b29117e61401f5110d506ff88706147180140a29c.scope: Deactivated successfully.
Oct 12 17:30:00 np0005481680 ceph-mon[73608]: overall HEALTH_OK
Oct 12 17:30:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:00 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:00 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:00 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:30:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:00.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:30:00 np0005481680 podman[280995]: 2025-10-12 21:30:00.976528372 +0000 UTC m=+0.068944122 container create e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:30:01 np0005481680 nova_compute[264665]: 2025-10-12 21:30:01.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:01 np0005481680 systemd[1]: Started libpod-conmon-e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431.scope.
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:00.947218904 +0000 UTC m=+0.039634714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:30:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:01.095746067 +0000 UTC m=+0.188161817 container init e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:01.107084147 +0000 UTC m=+0.199499877 container start e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:01.110646168 +0000 UTC m=+0.203061958 container attach e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:30:01 np0005481680 focused_lewin[281012]: 167 167
Oct 12 17:30:01 np0005481680 systemd[1]: libpod-e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431.scope: Deactivated successfully.
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:01.115019579 +0000 UTC m=+0.207435339 container died e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 12 17:30:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 12 17:30:01 np0005481680 systemd[1]: var-lib-containers-storage-overlay-a192f77195b36d653d43c8b379ad7fabbee0cf7a276551a7d37bbed6c8d651d2-merged.mount: Deactivated successfully.
Oct 12 17:30:01 np0005481680 podman[280995]: 2025-10-12 21:30:01.175243787 +0000 UTC m=+0.267659537 container remove e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lewin, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:30:01 np0005481680 systemd[1]: libpod-conmon-e3de0cc531c2b82cd8ef71dfb118274462a61e54ddcc83053cf02f42201ea431.scope: Deactivated successfully.
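Note: the focused_lewin records above are not a service failure. The container runs create → init → start → attach → died → remove in about 0.2 s, and its only output is "167 167", which matches the ceph UID/GID pair (uid/gid 167 in these images) that cephadm probes from inside the image; cephadm routinely launches short-lived containers from the ceph image to gather facts, and the admiring_saha container that follows (whose only output is an empty JSON object) is the same pattern. A minimal sketch for confirming this from the journal, assuming Python 3 with this excerpt on stdin, that pairs podman "container create"/"container remove" events by ID and prints each container's lifetime:

    import re
    import sys
    from datetime import datetime

    # Matches podman event lines such as:
    #   2025-10-12 21:30:00.976528372 +0000 UTC m=+0.068944122 container create <64-hex-id> (...)
    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) \+0000 UTC "
        r"m=\+\S+ container (?P<event>create|remove) (?P<cid>[0-9a-f]{64})"
    )

    created = {}  # container id -> create time (epoch seconds)
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        # Naive-local parsing is fine here: only differences are used.
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S").timestamp() + float("0." + m["frac"])
        if m["event"] == "create":
            created[m["cid"]] = ts
        elif m["cid"] in created:
            print(f"{m['cid'][:12]} lived {ts - created.pop(m['cid']):.3f}s")

Against this excerpt it reports roughly 0.199 s for focused_lewin (e3de0cc531c2) and 1.102 s for admiring_saha (df30274d47f8).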
Oct 12 17:30:01 np0005481680 podman[281039]: 2025-10-12 21:30:01.422249848 +0000 UTC m=+0.075432798 container create df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:30:01 np0005481680 systemd[1]: Started libpod-conmon-df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be.scope.
Oct 12 17:30:01 np0005481680 podman[281039]: 2025-10-12 21:30:01.387332126 +0000 UTC m=+0.040515166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:30:01 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:30:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe907addf59f5097563cd883d304b4e6e217c0c0a2e7765ffa862401f8ffb83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:30:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe907addf59f5097563cd883d304b4e6e217c0c0a2e7765ffa862401f8ffb83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:30:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe907addf59f5097563cd883d304b4e6e217c0c0a2e7765ffa862401f8ffb83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:30:01 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe907addf59f5097563cd883d304b4e6e217c0c0a2e7765ffa862401f8ffb83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
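The four "supports timestamps until 2038" lines are informational, not errors: the underlying XFS filesystem was formatted without the bigtime feature, so inode timestamps are limited to a 32-bit signed time_t, which is the 0x7fffffff the kernel prints. What that bound means, checked in Python:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t the filesystem can store.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00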
Oct 12 17:30:01 np0005481680 podman[281039]: 2025-10-12 21:30:01.544604743 +0000 UTC m=+0.197787713 container init df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:30:01 np0005481680 podman[281039]: 2025-10-12 21:30:01.55663594 +0000 UTC m=+0.209818930 container start df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:30:01 np0005481680 podman[281039]: 2025-10-12 21:30:01.561238458 +0000 UTC m=+0.214421428 container attach df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:30:01 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:01 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
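This ntirpc EVENT repeats every second or two for the NFS daemon on fd 38, in step with the health probes visible elsewhere in this log: something keeps connecting to the ganesha TCP port and sending bytes that are not a valid ONC RPC record (plausibly a load-balancer health check or PROXY-protocol header), so svc_vc_recv rejects the record-marking word and marks the transport dead. The dangling "%" at the end of the message looks like a broken format specifier in the library's log call, so the offending length value never makes it into the output. Under RFC 5531 record marking, the first four bytes of a stream message carry a last-fragment bit plus a 31-bit fragment length; a hedged sketch of that validation (MAX_FRAG is illustrative, not ganesha's actual limit):

    import struct

    MAX_FRAG = 1 << 20  # illustrative sanity cap on fragment length

    def parse_record_mark(header: bytes):
        """Parse the 4-byte ONC RPC record-marking word (RFC 5531 sec. 11)."""
        (word,) = struct.unpack(">I", header[:4])
        last_fragment = bool(word & 0x80000000)
        length = word & 0x7FFFFFFF
        if length == 0 or length > MAX_FRAG:
            raise ValueError(f"implausible fragment length {length}; not RPC?")
        return last_fragment, length

    # An HTTP-style probe decodes to a huge bogus length and is rejected:
    parse_record_mark(b"HEAD")  # ValueError: implausible fragment length 1212498244; not RPC?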
Oct 12 17:30:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:01.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
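The anonymous "HEAD / HTTP/1.0" requests that recur through this window, alternating between 192.168.122.100 and 192.168.122.102 roughly once per second with sub-millisecond latency, have the signature of HAProxy-style health checks against the RGW beast frontend rather than real S3 traffic. A small parser for these access lines (the regex assumes exactly the beast format shown in this log):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:30:01.795 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.search(line)
    print(m["client"], m["request"], m["status"], float(m["latency"]))
    # 192.168.122.102 HEAD / HTTP/1.0 200 0.001000025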
Oct 12 17:30:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:02] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:30:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:02] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Oct 12 17:30:02 np0005481680 nova_compute[264665]: 2025-10-12 21:30:02.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:02 np0005481680 lvm[281132]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:30:02 np0005481680 lvm[281132]: VG ceph_vg0 finished
Oct 12 17:30:02 np0005481680 admiring_saha[281056]: {}
Oct 12 17:30:02 np0005481680 systemd[1]: libpod-df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be.scope: Deactivated successfully.
Oct 12 17:30:02 np0005481680 systemd[1]: libpod-df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be.scope: Consumed 1.547s CPU time.
Oct 12 17:30:02 np0005481680 podman[281039]: 2025-10-12 21:30:02.465769313 +0000 UTC m=+1.118952293 container died df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:30:02 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3fe907addf59f5097563cd883d304b4e6e217c0c0a2e7765ffa862401f8ffb83-merged.mount: Deactivated successfully.
Oct 12 17:30:02 np0005481680 podman[281039]: 2025-10-12 21:30:02.524292418 +0000 UTC m=+1.177475378 container remove df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:30:02 np0005481680 systemd[1]: libpod-conmon-df30274d47f80336965157b8718ebadbe3bc7949b8a40683333b699edf9ad8be.scope: Deactivated successfully.
Oct 12 17:30:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:30:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:30:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:30:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:30:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:02 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:02 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 12 17:30:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:30:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:30:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:30:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:30:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:03 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:03.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:04 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:04 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:04.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct 12 17:30:05 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:05 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:05.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:06 np0005481680 nova_compute[264665]: 2025-10-12 21:30:06.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:06 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:06 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:06 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.3 KiB/s wr, 0 op/s
Oct 12 17:30:07 np0005481680 nova_compute[264665]: 2025-10-12 21:30:07.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
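Alertmanager itself is dispatching fine; both configured ceph-dashboard webhook receivers (compute-1 and compute-2 on port 8443) are unreachable, so every notification is dropped once its retry budget is spent, first with "context deadline exceeded" and later (17:30:17 and 17:30:27 below) with explicit "dial tcp ... i/o timeout" errors. A quick reachability probe under the same assumptions (hostnames and port taken from the log; the 5 s timeout is illustrative):

    import socket

    RECEIVERS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in RECEIVERS:
        try:
            # create_connection resolves the name and completes a TCP
            # handshake within the deadline, mirroring the dialer that
            # is timing out in the alertmanager messages.
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")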
Oct 12 17:30:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:07 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:08 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:08 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:08.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.3 KiB/s wr, 0 op/s
Oct 12 17:30:09 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:09 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:09.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:10 np0005481680 podman[281206]: 2025-10-12 21:30:10.129955368 +0000 UTC m=+0.084024218 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
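The health_status=healthy records from the EDPM-managed containers (multipathd here; ovn_metadata_agent, ovn_controller and iscsid later in this window) embed the container's full config_data as a Python-literal dict, so the mount and healthcheck layout can be recovered straight from the journal. A sketch, using brace balancing to find the end of the dict (safe for these lines because the embedded strings contain no braces):

    import ast

    def extract_config_data(line: str) -> dict:
        """Return the Python-literal dict following 'config_data=' in a
        podman health_status journal line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data value")

    # cfg = extract_config_data(journal_line)
    # cfg["healthcheck"]["test"] -> '/openstack/healthcheck'
    # len(cfg["volumes"])        -> 18 for the multipathd record above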
Oct 12 17:30:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:10 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:10 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:10 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:10.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:11 np0005481680 nova_compute[264665]: 2025-10-12 21:30:11.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.3 KiB/s wr, 1 op/s
Oct 12 17:30:11 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:11 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:30:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:12] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Oct 12 17:30:12 np0005481680 nova_compute[264665]: 2025-10-12 21:30:12.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:12 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:12 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:12.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:30:13 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:13 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:13.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:14 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:14 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:14 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:14.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.0 KiB/s wr, 0 op/s
Oct 12 17:30:15 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:15 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:15.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:16 np0005481680 nova_compute[264665]: 2025-10-12 21:30:16.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:16 np0005481680 podman[281234]: 2025-10-12 21:30:16.127341425 +0000 UTC m=+0.086150511 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 12 17:30:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:16 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4840003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:16 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:16 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4868009350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:30:17 np0005481680 nova_compute[264665]: 2025-10-12 21:30:17.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:17.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:30:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:17 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:30:18
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:30:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:30:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:30:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:18.367 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:30:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:18.368 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:30:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:18.368 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:18 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015229338615940613 of space, bias 1.0, pg target 0.4568801584782184 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
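The pg_autoscaler rows above encode a checkable calculation: each pool's raw PG target is its fraction of cluster capacity × bias × a cluster-wide PG budget, and every printed row is consistent with a budget of 300 (plausibly mon_target_pg_per_osd, default 100, times the 3 OSDs behind this 60 GiB cluster). The raw value is then quantized to a power of two and compared against the current pg_num with hysteresis, which is why everything stays at its current value here. A hedged reconstruction of two of the rows:

    # Assumed: PG budget 300 = mon_target_pg_per_osd (100) x 3 OSDs.
    PG_BUDGET = 300

    for pool, used_fraction, bias in [
        ("vms", 0.0015229338615940613, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        raw_target = used_fraction * bias * PG_BUDGET
        print(f"{pool}: raw pg target {raw_target}")

    # vms:                raw pg target 0.4568801584782184    (log: 0.4568801584782184 -> 32)
    # cephfs.cephfs.meta: raw pg target 0.0006104707950771635 (log: 0.0006104707950771635 -> 16)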
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:30:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:30:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:18 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:18.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 12 17:30:19 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:19 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:20 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:20 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:20 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:21 np0005481680 nova_compute[264665]: 2025-10-12 21:30:21.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Oct 12 17:30:21 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:21 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4844003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:22] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:30:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:22] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:30:22 np0005481680 nova_compute[264665]: 2025-10-12 21:30:22.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:22 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:22 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 4.7 KiB/s wr, 0 op/s
Oct 12 17:30:23 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:23 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:24 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4838001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:24 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f485c004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 1 op/s
Oct 12 17:30:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:25 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 12 17:30:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:26 np0005481680 nova_compute[264665]: 2025-10-12 21:30:26.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:26 np0005481680 kernel: ganesha.nfsd[280292]: segfault at 50 ip 00007f491628732e sp 00007f48cdffa210 error 4 in libntirpc.so.5.8[7f491626c000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 12 17:30:26 np0005481680 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 12 17:30:26 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd[279101]: 12/10/2025 21:30:26 : epoch 68ec1d7e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4864002ee0 fd 38 proxy ignored for local
Oct 12 17:30:26 np0005481680 systemd[1]: Started Process Core Dump (PID 281290/UID 0).
Oct 12 17:30:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 0 op/s
Oct 12 17:30:27 np0005481680 nova_compute[264665]: 2025-10-12 21:30:27.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:27.227Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:30:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:27.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:30:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:29.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:29 np0005481680 podman[281294]: 2025-10-12 21:30:29.136464717 +0000 UTC m=+0.087056855 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 12 17:30:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 0 op/s
Oct 12 17:30:29 np0005481680 podman[281296]: 2025-10-12 21:30:29.216895321 +0000 UTC m=+0.164508523 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 12 17:30:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
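
The recurring _set_new_cache_sizes lines are the monitor's cache autotuner resizing its RocksDB and map caches toward a memory budget (roughly 0.95 GiB here). A sketch for checking the budget it tunes against, assuming the autotuner is driven by mon_memory_target as in recent Ceph releases:

    ceph config get mon mon_memory_target   # mon memory budget the tuner aims for (option name assumed)
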
Oct 12 17:30:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:31 np0005481680 nova_compute[264665]: 2025-10-12 21:30:31.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 3.8 KiB/s wr, 8 op/s
Oct 12 17:30:31 np0005481680 systemd-coredump[281291]: Process 279105 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 57:
                                                       #0  0x00007f491628732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Oct 12 17:30:31 np0005481680 systemd[1]: systemd-coredump@16-281290-0.service: Deactivated successfully.
Oct 12 17:30:31 np0005481680 systemd[1]: systemd-coredump@16-281290-0.service: Consumed 1.239s CPU time.
Oct 12 17:30:31 np0005481680 podman[281348]: 2025-10-12 21:30:31.649858279 +0000 UTC m=+0.041295796 container died b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:30:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e12ccf33d29f9c2b16c99e5b10cdec9c41fc71b189dacb2c7d16b59d98fa01da-merged.mount: Deactivated successfully.
Oct 12 17:30:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:31.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:32] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:30:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:32] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Oct 12 17:30:32 np0005481680 podman[281348]: 2025-10-12 21:30:32.128870345 +0000 UTC m=+0.520307842 container remove b54dabf5b7b08605f3d8e64c31a94a0b2f5df91a937c345ef7f970c0d76a480f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-nfs-cephfs-2-0-compute-0-hypubd, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:30:32 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Main process exited, code=exited, status=139/n/a
Oct 12 17:30:32 np0005481680 nova_compute[264665]: 2025-10-12 21:30:32.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:32 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:30:32 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.092s CPU time.
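
status=139 is 128+11, i.e. the main process died on SIGSEGV, matching the systemd-coredump record above (crash inside /usr/lib64/libntirpc.so.5.8). A sketch for pulling the core apart with systemd's tooling; symbol resolution would additionally need gdb plus debuginfo packages for nfs-ganesha and libntirpc:

    coredumpctl list ganesha.nfsd   # cores captured for the ganesha binary
    coredumpctl info 279105         # metadata plus the (truncated) stack trace
    coredumpctl debug 279105        # open the core in gdb, if gdb is installed
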
Oct 12 17:30:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:33.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.2 KiB/s wr, 7 op/s
Oct 12 17:30:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:30:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:30:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Oct 12 17:30:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:36 np0005481680 nova_compute[264665]: 2025-10-12 21:30:36.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:36.290 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:30:36 np0005481680 nova_compute[264665]: 2025-10-12 21:30:36.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:36.292 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:30:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [WARNING] 284/213036 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 12 17:30:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [NOTICE] 284/213036 (4) : haproxy version is 2.3.17-d1c9119
Oct 12 17:30:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [NOTICE] 284/213036 (4) : path to executable is /usr/local/sbin/haproxy
Oct 12 17:30:36 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf[97138]: [ALERT] 284/213036 (4) : backend 'backend' has no server available!
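
haproxy's Layer4 "Connection refused" means its health check found nothing listening on the ganesha backend, consistent with the segfault above, and with zero servers left the NFS frontend is effectively down. Two quick checks from the node (a sketch; the real backend address/port comes from this haproxy instance's config, assumed here to be the local ganesha on the standard NFS port 2049):

    ss -tlnp | grep -E 'ganesha|:2049'   # anything listening for NFS?
    podman ps -a --filter name=nfs       # state of the ganesha container
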
Oct 12 17:30:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:37.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 12 17:30:37 np0005481680 nova_compute[264665]: 2025-10-12 21:30:37.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:37.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:30:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:38 np0005481680 nova_compute[264665]: 2025-10-12 21:30:38.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:38 np0005481680 nova_compute[264665]: 2025-10-12 21:30:38.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:38 np0005481680 nova_compute[264665]: 2025-10-12 21:30:38.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:38 np0005481680 nova_compute[264665]: 2025-10-12 21:30:38.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 12 17:30:39 np0005481680 nova_compute[264665]: 2025-10-12 21:30:39.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:39 np0005481680 nova_compute[264665]: 2025-10-12 21:30:39.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:30:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.688 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.688 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.689 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.689 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:30:40 np0005481680 nova_compute[264665]: 2025-10-12 21:30:40.690 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
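
The resource tracker sizes its Ceph-backed disk pool by shelling out to the ceph CLI. The same command can be replayed by hand to see exactly what nova parses (command copied verbatim from the log; jq is only assumed for readability):

    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf \
      | jq '.stats'   # cluster totals nova derives free_disk from
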
Oct 12 17:30:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:41 np0005481680 podman[281420]: 2025-10-12 21:30:41.129867129 +0000 UTC m=+0.091198782 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:30:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 12 17:30:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:30:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008844530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.227 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.445 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.447 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4648MB free_disk=59.94242858886719GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.448 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.448 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.526 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.527 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:30:41 np0005481680 nova_compute[264665]: 2025-10-12 21:30:41.544 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:30:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:30:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379329699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:30:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:42] "GET /metrics HTTP/1.1" 200 48390 "" "Prometheus/2.51.0"
Oct 12 17:30:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:42] "GET /metrics HTTP/1.1" 200 48390 "" "Prometheus/2.51.0"
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.025 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.033 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.062 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.065 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.066 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:30:42 np0005481680 nova_compute[264665]: 2025-10-12 21:30:42.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
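
Placement derives schedulable capacity per resource class as (total - reserved) x allocation_ratio. Re-running that arithmetic on the inventory reported above (a sketch; awk is used only for the float math):

    awk 'BEGIN {
      printf "VCPU:      %g\n", (8    - 0)   * 4.0   # -> 32 schedulable vCPUs
      printf "MEMORY_MB: %g\n", (7680 - 512) * 1.0   # -> 7168 MB
      printf "DISK_GB:   %g\n", (59   - 1)   * 0.9   # -> 52.2 GB
    }'
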
Oct 12 17:30:42 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Scheduled restart job, restart counter is at 17.
Oct 12 17:30:42 np0005481680 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
Oct 12 17:30:42 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Consumed 2.092s CPU time.
Oct 12 17:30:42 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Start request repeated too quickly.
Oct 12 17:30:42 np0005481680 systemd[1]: ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service: Failed with result 'exit-code'.
Oct 12 17:30:42 np0005481680 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.hypubd for 5adb8c35-1b74-5730-a252-62321f654cd5.
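
With the restart counter at 17, systemd has tripped the unit's start rate limit ("Start request repeated too quickly") and will not retry on its own. Once the underlying ganesha crash is addressed, the unit has to be cleared and started by hand (a sketch; unit name copied from the log):

    systemctl reset-failed ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service
    systemctl start ceph-5adb8c35-1b74-5730-a252-62321f654cd5@nfs.cephfs.2.0.compute-0.hypubd.service
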
Oct 12 17:30:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:43.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.1 KiB/s wr, 20 op/s
Oct 12 17:30:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:45 np0005481680 nova_compute[264665]: 2025-10-12 21:30:45.066 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:45 np0005481680 nova_compute[264665]: 2025-10-12 21:30:45.084 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:30:45 np0005481680 nova_compute[264665]: 2025-10-12 21:30:45.085 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:30:45 np0005481680 nova_compute[264665]: 2025-10-12 21:30:45.085 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:30:45 np0005481680 nova_compute[264665]: 2025-10-12 21:30:45.107 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:30:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Oct 12 17:30:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:45.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:46 np0005481680 nova_compute[264665]: 2025-10-12 21:30:46.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:46 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:30:46.294 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
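
This transaction is the metadata agent acknowledging the nb_cfg bump it matched earlier (9 -> 10): writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row is how the agent reports liveness back through the southbound DB. A sketch for verifying the write landed, assuming ovn-sbctl on this node can reach the SB database with its default connection settings:

    ovn-sbctl --columns=name,external_ids list Chassis_Private
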
Oct 12 17:30:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:47.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:47 np0005481680 podman[281496]: 2025-10-12 21:30:47.137255911 +0000 UTC m=+0.088632365 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 12 17:30:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Oct 12 17:30:47 np0005481680 nova_compute[264665]: 2025-10-12 21:30:47.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:47.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:30:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:30:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:30:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:30:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Oct 12 17:30:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:51.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:51 np0005481680 nova_compute[264665]: 2025-10-12 21:30:51.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 12 17:30:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:52] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:30:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:30:52] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:30:52 np0005481680 nova_compute[264665]: 2025-10-12 21:30:52.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:53.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:30:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:30:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:55.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:30:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:55.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:56 np0005481680 nova_compute[264665]: 2025-10-12 21:30:56.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:57.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 2 op/s
Oct 12 17:30:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:57.236Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:30:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:30:57.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:30:57 np0005481680 nova_compute[264665]: 2025-10-12 21:30:57.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:30:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:30:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:30:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:30:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:30:59.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:30:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 2 op/s
Oct 12 17:30:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:30:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:30:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:30:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:30:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:00 np0005481680 podman[281529]: 2025-10-12 21:31:00.128179271 +0000 UTC m=+0.097498451 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 12 17:31:00 np0005481680 podman[281530]: 2025-10-12 21:31:00.169814175 +0000 UTC m=+0.124246165 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 12 17:31:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:01.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:01 np0005481680 nova_compute[264665]: 2025-10-12 21:31:01.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 2 op/s
Oct 12 17:31:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:02] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:31:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:02] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Oct 12 17:31:02 np0005481680 nova_compute[264665]: 2025-10-12 21:31:02.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:03.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:31:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:31:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 17:31:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
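The mon_command payloads above ({"prefix": "osd blocklist ls", ...}, {"prefix": "config rm", ...}) are the JSON form of ordinary ceph CLI calls, dispatched here by the cephadm mgr module. Equivalent commands can be sent from the python-rados binding; a minimal sketch, assuming /etc/ceph/ceph.conf and a client.admin keyring are present on the host:

    import json
    import rados

    # Send the same mon_commands the mgr dispatches in the log above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b'')
        print(ret, out.decode() or errs)

        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config rm", "who": "osd/host:compute-0",
                        "name": "osd_memory_target"}), b'')
        print(ret, errs)
    finally:
        cluster.shutdown()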
Oct 12 17:31:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:04 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=infra.usagestats t=2025-10-12T21:31:04.223150254Z level=info msg="Usage stats are ready to report"
Oct 12 17:31:04 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:31:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:05.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:31:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:31:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:06 np0005481680 nova_compute[264665]: 2025-10-12 21:31:06.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:06 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:31:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:07.237Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
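The dispatcher error above means Alertmanager's POSTs to the ceph-dashboard webhook receivers on compute-1 and compute-2 timed out ("context deadline exceeded"). A quick reachability probe against the same endpoints with a short timeout separates a down receiver from an Alertmanager-side problem; a sketch using only the URLs shown in the log:

    import urllib.request

    # Probe the dashboard receiver endpoints Alertmanager failed to reach.
    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            req = urllib.request.Request(url, data=b"{}", method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, resp.status)
        except Exception as exc:
            print(url, "failed:", exc)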
Oct 12 17:31:07 np0005481680 nova_compute[264665]: 2025-10-12 21:31:07.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
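CEPHADM_FAILED_DAEMON is raised when cephadm finds daemons in an error state on some host; the usual follow-up is to list the active health checks and per-daemon status. A sketch shelling out to the ceph CLI (the JSON shape follows the standard `health detail` output, though field names can vary across releases):

    import json
    import subprocess

    # List raised health checks, then per-daemon status from the orchestrator.
    health = json.loads(subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"]))
    for name, check in health.get("checks", {}).items():
        print(name, check["summary"]["message"])

    subprocess.run(["ceph", "orch", "ps"], check=False)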
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:07 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:31:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:07.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.034158693 +0000 UTC m=+0.038186647 container create ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:31:08 np0005481680 systemd[1]: Started libpod-conmon-ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33.scope.
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.016511942 +0000 UTC m=+0.020539886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:08 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.144727438 +0000 UTC m=+0.148755402 container init ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.154411725 +0000 UTC m=+0.158439679 container start ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.159018462 +0000 UTC m=+0.163046476 container attach ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:31:08 np0005481680 beautiful_dirac[281797]: 167 167
Oct 12 17:31:08 np0005481680 systemd[1]: libpod-ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33.scope: Deactivated successfully.
Oct 12 17:31:08 np0005481680 conmon[281797]: conmon ef50cf023865bc1ee1df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33.scope/container/memory.events
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.16515566 +0000 UTC m=+0.169183624 container died ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:31:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-220438576d63e20f312d8c7e1eee7f0295bf1fbf7a8e4dc39cd5f060248f0beb-merged.mount: Deactivated successfully.
Oct 12 17:31:08 np0005481680 podman[281781]: 2025-10-12 21:31:08.231881034 +0000 UTC m=+0.235908988 container remove ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dirac, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:31:08 np0005481680 systemd[1]: libpod-conmon-ef50cf023865bc1ee1df112b77c00382d608eda9d2ee46bf1644bb2fc4029b33.scope: Deactivated successfully.
Oct 12 17:31:08 np0005481680 podman[281822]: 2025-10-12 21:31:08.458930254 +0000 UTC m=+0.075986082 container create c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:31:08 np0005481680 systemd[1]: Started libpod-conmon-c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492.scope.
Oct 12 17:31:08 np0005481680 podman[281822]: 2025-10-12 21:31:08.429107702 +0000 UTC m=+0.046163580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:08 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:08 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
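The xfs kernel lines above (and the similar runs below) are informational: the filesystem backing /var/lib/containers was formatted without the XFS "bigtime" feature, so its inode timestamps max out at 2038-01-19 (0x7fffffff). Whether bigtime is enabled can be checked with xfs_info; a sketch, assuming xfsprogs new enough to report the flag:

    import subprocess

    # bigtime=1 in the xfs_info output means timestamps extend beyond 2038.
    out = subprocess.check_output(
        ["xfs_info", "/var/lib/containers/storage"], text=True)
    print("bigtime enabled:", "bigtime=1" in out)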
Oct 12 17:31:08 np0005481680 podman[281822]: 2025-10-12 21:31:08.575262996 +0000 UTC m=+0.192318834 container init c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:31:08 np0005481680 podman[281822]: 2025-10-12 21:31:08.591441229 +0000 UTC m=+0.208497047 container start c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 12 17:31:08 np0005481680 podman[281822]: 2025-10-12 21:31:08.595542404 +0000 UTC m=+0.212598222 container attach c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:31:08 np0005481680 ceph-mon[73608]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 12 17:31:08 np0005481680 confident_archimedes[281838]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:31:08 np0005481680 confident_archimedes[281838]: --> All data devices are unavailable
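The confident_archimedes output is cephadm's periodic ceph-volume probe: it was handed one LVM data device and reports all devices unavailable, most likely because that LV is already consumed by the existing OSD (see the lvm list output further down), leaving nothing new to deploy. The inventory gives rejection reasons explicitly; a sketch meant to run inside the ceph container (e.g. via `cephadm shell`), hedged since the JSON layout here is from memory:

    import json
    import subprocess

    # List devices with availability and why ceph-volume rejects them.
    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True))
    for dev in inv:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))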
Oct 12 17:31:09 np0005481680 systemd[1]: libpod-c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492.scope: Deactivated successfully.
Oct 12 17:31:09 np0005481680 podman[281822]: 2025-10-12 21:31:09.018310463 +0000 UTC m=+0.635366281 container died c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:31:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fd639f294086d74a7ebec11164b0ca428bb64611fdfbb90ba15fee86ccbf258a-merged.mount: Deactivated successfully.
Oct 12 17:31:09 np0005481680 podman[281822]: 2025-10-12 21:31:09.088595419 +0000 UTC m=+0.705651247 container remove c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:31:09 np0005481680 systemd[1]: libpod-conmon-c4e3326a6c7d81cf2eb3e3172149a21082ec5249de4ff8f7addc46cee243d492.scope: Deactivated successfully.
Oct 12 17:31:09 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:31:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:09 np0005481680 podman[281960]: 2025-10-12 21:31:09.883252228 +0000 UTC m=+0.071967500 container create 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:31:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:09 np0005481680 systemd[1]: Started libpod-conmon-145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e.scope.
Oct 12 17:31:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:09 np0005481680 podman[281960]: 2025-10-12 21:31:09.854669767 +0000 UTC m=+0.043385089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:10 np0005481680 podman[281960]: 2025-10-12 21:31:10.057862587 +0000 UTC m=+0.246577849 container init 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:31:10 np0005481680 podman[281960]: 2025-10-12 21:31:10.067970206 +0000 UTC m=+0.256685478 container start 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:31:10 np0005481680 podman[281960]: 2025-10-12 21:31:10.073165298 +0000 UTC m=+0.261880560 container attach 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:31:10 np0005481680 compassionate_beaver[281977]: 167 167
Oct 12 17:31:10 np0005481680 systemd[1]: libpod-145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e.scope: Deactivated successfully.
Oct 12 17:31:10 np0005481680 conmon[281977]: conmon 145bd307fae4a57d592d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e.scope/container/memory.events
Oct 12 17:31:10 np0005481680 podman[281960]: 2025-10-12 21:31:10.076472113 +0000 UTC m=+0.265187385 container died 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:31:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-25ab5ecd2551b2d7f2a42552740d50bdae7c64f1b2ec7bc63a8c345be80fdcf9-merged.mount: Deactivated successfully.
Oct 12 17:31:10 np0005481680 podman[281960]: 2025-10-12 21:31:10.133156551 +0000 UTC m=+0.321871813 container remove 145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:31:10 np0005481680 systemd[1]: libpod-conmon-145bd307fae4a57d592dfd3d157ed4378d732290783c4d41459a987bc08dd10e.scope: Deactivated successfully.
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.366708297 +0000 UTC m=+0.054344309 container create 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:31:10 np0005481680 systemd[1]: Started libpod-conmon-89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a.scope.
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.345171367 +0000 UTC m=+0.032807399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:10 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a97f08adfecded38ae615225ded234910f106b86f0f3df5cb5c213c43c4b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a97f08adfecded38ae615225ded234910f106b86f0f3df5cb5c213c43c4b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a97f08adfecded38ae615225ded234910f106b86f0f3df5cb5c213c43c4b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:10 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3a97f08adfecded38ae615225ded234910f106b86f0f3df5cb5c213c43c4b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.467875341 +0000 UTC m=+0.155511413 container init 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.477701992 +0000 UTC m=+0.165338024 container start 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.482124245 +0000 UTC m=+0.169760317 container attach 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]: {
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:    "0": [
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:        {
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "devices": [
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "/dev/loop3"
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            ],
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "lv_name": "ceph_lv0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "lv_size": "21470642176",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "name": "ceph_lv0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "tags": {
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.cluster_name": "ceph",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.crush_device_class": "",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.encrypted": "0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.osd_id": "0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.type": "block",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.vdo": "0",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:                "ceph.with_tpm": "0"
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            },
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "type": "block",
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:            "vg_name": "ceph_vg0"
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:        }
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]:    ]
Oct 12 17:31:10 np0005481680 friendly_kirch[282018]: }
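The JSON printed by friendly_kirch has the shape of `ceph-volume lvm list --format json`: OSD ids keyed to a list of LV records carrying the ceph.* tags. A short sketch flattening a trimmed copy of the record above into one line per OSD:

    import json

    # Trimmed copy of the lvm-list record logged above.
    raw = '''
    {"0": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "type": "block",
            "tags": {"ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}}]}
    '''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} ({lv['type']}) "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")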
Oct 12 17:31:10 np0005481680 systemd[1]: libpod-89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a.scope: Deactivated successfully.
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.840050948 +0000 UTC m=+0.527686970 container died 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:31:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0e3a97f08adfecded38ae615225ded234910f106b86f0f3df5cb5c213c43c4b7-merged.mount: Deactivated successfully.
Oct 12 17:31:10 np0005481680 podman[282001]: 2025-10-12 21:31:10.90786961 +0000 UTC m=+0.595505632 container remove 89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:31:10 np0005481680 systemd[1]: libpod-conmon-89079efabceb5a2abcf35cf3735e881c563dd1a9f531b59dd61c8e203ce0609a.scope: Deactivated successfully.
Oct 12 17:31:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:11.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:11 np0005481680 nova_compute[264665]: 2025-10-12 21:31:11.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:11 np0005481680 podman[282089]: 2025-10-12 21:31:11.336635463 +0000 UTC m=+0.099408310 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 12 17:31:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:11 np0005481680 podman[282154]: 2025-10-12 21:31:11.828788134 +0000 UTC m=+0.117148453 container create 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 17:31:11 np0005481680 podman[282154]: 2025-10-12 21:31:11.752026353 +0000 UTC m=+0.040386732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:11 np0005481680 systemd[1]: Started libpod-conmon-0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd.scope.
Oct 12 17:31:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:11.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:11 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:12] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:31:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:12] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Oct 12 17:31:12 np0005481680 podman[282154]: 2025-10-12 21:31:12.051296048 +0000 UTC m=+0.339656337 container init 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:31:12 np0005481680 podman[282154]: 2025-10-12 21:31:12.066806264 +0000 UTC m=+0.355166593 container start 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:31:12 np0005481680 nifty_shirley[282170]: 167 167
Oct 12 17:31:12 np0005481680 systemd[1]: libpod-0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd.scope: Deactivated successfully.
Oct 12 17:31:12 np0005481680 podman[282154]: 2025-10-12 21:31:12.11598108 +0000 UTC m=+0.404341399 container attach 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:31:12 np0005481680 podman[282154]: 2025-10-12 21:31:12.116977496 +0000 UTC m=+0.405337825 container died 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:31:12 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8fcfa56dfe8824f0b0373ce7fd8715690200eb61b6becac06c2104d9e80dc074-merged.mount: Deactivated successfully.
Oct 12 17:31:12 np0005481680 nova_compute[264665]: 2025-10-12 21:31:12.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:12 np0005481680 podman[282154]: 2025-10-12 21:31:12.70240424 +0000 UTC m=+0.990764539 container remove 0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:31:12 np0005481680 systemd[1]: libpod-conmon-0d3e5338b07491b8bc5c5995dfe61e98a6c452a573bbfda6ee494aa7bf110dbd.scope: Deactivated successfully.
Oct 12 17:31:12 np0005481680 podman[282194]: 2025-10-12 21:31:12.995749174 +0000 UTC m=+0.107989271 container create 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:31:13 np0005481680 podman[282194]: 2025-10-12 21:31:12.927393157 +0000 UTC m=+0.039633304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:31:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:13 np0005481680 systemd[1]: Started libpod-conmon-1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7.scope.
Oct 12 17:31:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cfd27a7a31cf1d02d824035038472c9fcc1689d982b7f590b05f883094a0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cfd27a7a31cf1d02d824035038472c9fcc1689d982b7f590b05f883094a0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cfd27a7a31cf1d02d824035038472c9fcc1689d982b7f590b05f883094a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013cfd27a7a31cf1d02d824035038472c9fcc1689d982b7f590b05f883094a0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:13 np0005481680 podman[282194]: 2025-10-12 21:31:13.233612149 +0000 UTC m=+0.345852296 container init 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:31:13 np0005481680 podman[282194]: 2025-10-12 21:31:13.246039996 +0000 UTC m=+0.358280083 container start 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 17:31:13 np0005481680 podman[282194]: 2025-10-12 21:31:13.283244467 +0000 UTC m=+0.395484574 container attach 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:31:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:14 np0005481680 lvm[282286]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:31:14 np0005481680 lvm[282286]: VG ceph_vg0 finished
Oct 12 17:31:14 np0005481680 hopeful_nightingale[282210]: {}
Oct 12 17:31:14 np0005481680 systemd[1]: libpod-1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7.scope: Deactivated successfully.
Oct 12 17:31:14 np0005481680 podman[282194]: 2025-10-12 21:31:14.152031629 +0000 UTC m=+1.264271716 container died 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:31:14 np0005481680 systemd[1]: libpod-1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7.scope: Consumed 1.589s CPU time.
Oct 12 17:31:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-013cfd27a7a31cf1d02d824035038472c9fcc1689d982b7f590b05f883094a0b-merged.mount: Deactivated successfully.
Oct 12 17:31:14 np0005481680 podman[282194]: 2025-10-12 21:31:14.234175308 +0000 UTC m=+1.346415405 container remove 1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:31:14 np0005481680 systemd[1]: libpod-conmon-1baee5709f5653d591b41404e0f06427f3be58e4a58dd4519e4a61f7292255c7.scope: Deactivated successfully.
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 31 op/s
Oct 12 17:31:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:15.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:16 np0005481680 nova_compute[264665]: 2025-10-12 21:31:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:17.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:31:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:17.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:31:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:17.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:31:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 31 op/s
Oct 12 17:31:17 np0005481680 nova_compute[264665]: 2025-10-12 21:31:17.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:18 np0005481680 podman[282331]: 2025-10-12 21:31:18.146450654 +0000 UTC m=+0.094618737 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:31:18
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs', '.rgw.root', 'vms', '.mgr', 'images']
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:31:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct 12 17:31:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:31:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:31:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:18.367 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:31:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:18.368 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:31:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:18.368 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:31:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:31:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:31:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:19.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 85 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 12 17:31:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:19.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:31:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:31:21 np0005481680 nova_compute[264665]: 2025-10-12 21:31:21.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 85 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 12 17:31:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:21.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:22] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 12 17:31:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:22] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 12 17:31:22 np0005481680 nova_compute[264665]: 2025-10-12 21:31:22.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 85 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 12 17:31:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:23.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:25.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 12 17:31:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:25.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:26 np0005481680 nova_compute[264665]: 2025-10-12 21:31:26.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:31:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:31:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:27.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.273 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.274 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.289 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 12 17:31:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.363 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.363 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.421 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.422 2 INFO nova.compute.claims [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Claim successful on node compute-0.ctlplane.example.com
Oct 12 17:31:27 np0005481680 nova_compute[264665]: 2025-10-12 21:31:27.524 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:31:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:31:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318532420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.010 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.020 2 DEBUG nova.compute.provider_tree [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.035 2 DEBUG nova.scheduler.client.report [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.056 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.058 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.108 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.109 2 DEBUG nova.network.neutron [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.139 2 INFO nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.159 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.238 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.240 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.240 2 INFO nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Creating image(s)
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.279 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.321 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.354 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.359 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.431 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.433 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.434 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.434 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.476 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.483 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:31:28 np0005481680 nova_compute[264665]: 2025-10-12 21:31:28.983 2 DEBUG nova.policy [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 12 17:31:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:29.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 76 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 115 op/s
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.397 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.514 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.658 2 DEBUG nova.objects.instance [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid 55d1b5fb-6799-4992-861f-d00e3165cf1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.675 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.675 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Ensure instance console log exists: /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.676 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.677 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:31:29 np0005481680 nova_compute[264665]: 2025-10-12 21:31:29.678 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:31:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.688 2 DEBUG nova.network.neutron [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Successfully updated port: 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.705 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.706 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.706 2 DEBUG nova.network.neutron [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.781 2 DEBUG nova.compute.manager [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-changed-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.781 2 DEBUG nova.compute.manager [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Refreshing instance network info cache due to event network-changed-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.782 2 DEBUG oslo_concurrency.lockutils [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:31:30 np0005481680 nova_compute[264665]: 2025-10-12 21:31:30.877 2 DEBUG nova.network.neutron [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 12 17:31:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:31.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:31 np0005481680 podman[282578]: 2025-10-12 21:31:31.16213621 +0000 UTC m=+0.121461473 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:31:31 np0005481680 podman[282579]: 2025-10-12 21:31:31.190300789 +0000 UTC m=+0.146705618 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller)
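
The two podman records above are periodic container health checks: each container's config carries a healthcheck block ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/<name>), and podman reports health_status=healthy with a zero failing streak. The same check can be triggered by hand; a sketch wrapping the CLI (container name taken from the log):

    # Run the configured healthcheck ('/openstack/healthcheck') for the
    # iscsid container, the same probe podman records above.
    import subprocess

    subprocess.check_call(["podman", "healthcheck", "run", "iscsid"])
    # exit status 0 corresponds to health_status=healthy
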
Oct 12 17:31:31 np0005481680 nova_compute[264665]: 2025-10-12 21:31:31.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 76 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 980 KiB/s wr, 30 op/s
Oct 12 17:31:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:32] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 12 17:31:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:32] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.857 2 DEBUG nova.network.neutron [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Updating instance_info_cache with network_info: [{"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.875 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.875 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Instance network_info: |[{"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
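
The network_info blob repeated in these records is an ordinary list of VIF dicts, so the addresses nova later wires up (fixed 10.100.0.10, floating 192.168.122.177) fall out of a plain traversal of network.subnets[].ips[]. A minimal sketch over exactly the structure shown:

    # Walk the network_info structure from the records above and pull out
    # the fixed and floating IPv4 addresses (10.100.0.10 / 192.168.122.177).
    def addresses(network_info):
        fixed, floating = [], []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        fixed.append(ip["address"])
                    floating.extend(f["address"]
                                    for f in ip.get("floating_ips", []))
        return fixed, floating
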
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.876 2 DEBUG oslo_concurrency.lockutils [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.877 2 DEBUG nova.network.neutron [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Refreshing network info cache for port 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.883 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Start _get_guest_xml network_info=[{"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.893 2 WARNING nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.913 2 DEBUG nova.virt.libvirt.host [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.915 2 DEBUG nova.virt.libvirt.host [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.921 2 DEBUG nova.virt.libvirt.host [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.922 2 DEBUG nova.virt.libvirt.host [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.922 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.923 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.924 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.925 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.925 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.926 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.926 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.926 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.927 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.927 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.928 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.928 2 DEBUG nova.virt.hardware [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
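
The topology lines above describe a brute-force search: with no flavor or image constraints (limits 65536:65536:65536, preferences 0:0:0), every (sockets, cores, threads) triple whose product equals the vCPU count is a candidate, so a 1-vCPU guest can only yield 1:1:1. The same arithmetic as a sketch (not nova's code, just the enumeration the log describes):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, as in the "Build topologies for 1 vcpu(s)" lines above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log
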
Oct 12 17:31:32 np0005481680 nova_compute[264665]: 2025-10-12 21:31:32.932 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:31:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:33.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:31:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 76 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 979 KiB/s wr, 30 op/s
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918430943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:31:33 np0005481680 nova_compute[264665]: 2025-10-12 21:31:33.484 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:33 np0005481680 nova_compute[264665]: 2025-10-12 21:31:33.525 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:31:33 np0005481680 nova_compute[264665]: 2025-10-12 21:31:33.531 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:33.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:31:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/105991044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.015 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
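
Both "ceph mon dump" subprocess calls above are nova's rbd driver discovering the cluster's monitors; the parsed addresses become the three <host> elements in the disk XML further down. A standalone sketch of the same query and parse, reusing the client id and conf path from the log (the mons/name/addr field names follow the standard mon dump JSON layout):

    # Re-run the "ceph mon dump" query from the log and extract monitor
    # addresses, the data nova feeds into the RBD <host> elements below.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon["addr"])
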
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.017 2 DEBUG nova.virt.libvirt.vif [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-261482013',display_name='tempest-TestNetworkBasicOps-server-261482013',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-261482013',id=9,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgmDJhuX4mQ2N9lu5QmSZMawvcKI7kxZ/9TUIj6kqdIHbwCKLtDkfTNcJ5VByVsnkshb8S0pbpn1a5UUDSYM+40pI/2xI7OtIC5Mb47EJG7C7iZQp6YqUAPrPnU/Gfy2w==',key_name='tempest-TestNetworkBasicOps-1278047717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-urses6m6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:31:28Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=55d1b5fb-6799-4992-861f-d00e3165cf1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.018 2 DEBUG nova.network.os_vif_util [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.020 2 DEBUG nova.network.os_vif_util [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.022 2 DEBUG nova.objects.instance [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55d1b5fb-6799-4992-861f-d00e3165cf1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.042 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <uuid>55d1b5fb-6799-4992-861f-d00e3165cf1b</uuid>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <name>instance-00000009</name>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-261482013</nova:name>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:31:32</nova:creationTime>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <nova:port uuid="7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="serial">55d1b5fb-6799-4992-861f-d00e3165cf1b</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="uuid">55d1b5fb-6799-4992-861f-d00e3165cf1b</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/55d1b5fb-6799-4992-861f-d00e3165cf1b_disk">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:32:b5:a7"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <target dev="tap7c6f02be-9e"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/console.log" append="off"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:31:34 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:31:34 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:31:34 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:31:34 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
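
_get_guest_xml only renders the document; the driver then hands it to libvirtd to define and boot the domain. A sketch of that hand-off with the libvirt Python bindings (the XML file name is an assumption; nova keeps the XML in memory and wraps these calls in its own host API):

    # Define and start a domain from XML like the dump above, using the
    # libvirt Python bindings.
    import libvirt

    with open("instance-00000009.xml") as f:   # the XML dumped above
        xml = f.read()
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)      # persist the domain definition
    dom.createWithFlags(0)         # boot it ("spawning" -> running)
    print(dom.name(), dom.ID())
    conn.close()
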
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.044 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Preparing to wait for external event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.044 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.045 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.045 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
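
The Acquiring/acquired/released triple above is oslo.concurrency's named-lock pattern: event registration for the instance is serialized on a "<uuid>-events" lock so the network-vif-plugged waiter is created exactly once. The same bracketing as a sketch:

    # The named-lock pattern behind the "Acquiring/acquired/released" lines:
    # oslo.concurrency serializes event registration per instance UUID.
    from oslo_concurrency import lockutils

    uuid = "55d1b5fb-6799-4992-861f-d00e3165cf1b"
    with lockutils.lock(f"{uuid}-events"):
        # critical section: create-or-get the pending
        # network-vif-plugged event for this instance
        pass
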
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.046 2 DEBUG nova.virt.libvirt.vif [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-261482013',display_name='tempest-TestNetworkBasicOps-server-261482013',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-261482013',id=9,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgmDJhuX4mQ2N9lu5QmSZMawvcKI7kxZ/9TUIj6kqdIHbwCKLtDkfTNcJ5VByVsnkshb8S0pbpn1a5UUDSYM+40pI/2xI7OtIC5Mb47EJG7C7iZQp6YqUAPrPnU/Gfy2w==',key_name='tempest-TestNetworkBasicOps-1278047717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-urses6m6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:31:28Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=55d1b5fb-6799-4992-861f-d00e3165cf1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.047 2 DEBUG nova.network.os_vif_util [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.048 2 DEBUG nova.network.os_vif_util [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.048 2 DEBUG os_vif [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c6f02be-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c6f02be-9e, col_values=(('external_ids', {'iface-id': '7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:b5:a7', 'vm-uuid': '55d1b5fb-6799-4992-861f-d00e3165cf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
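
The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) run in one OVSDB transaction: create the tap port on br-int and tag it with the iface-id/attached-mac external_ids that OVN later matches against. A standalone sketch of the same transaction (the db.sock path and timeout are assumptions):

    # Reproduce the AddPortCommand/DbSetCommand transaction from the log
    # with ovsdbapp (a sketch; socket path and timeout are assumptions).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap7c6f02be-9e", may_exist=True))
        txn.add(api.db_set("Interface", "tap7c6f02be-9e",
                           ("external_ids", {
                               "iface-id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29",
                               "attached-mac": "fa:16:3e:32:b5:a7",
                           })))
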
Oct 12 17:31:34 np0005481680 NetworkManager[44859]: <info>  [1760304694.0591] manager: (tap7c6f02be-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.069 2 INFO os_vif [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e')#033[00m
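
The plug itself goes through os-vif's public API: nova converts its VIF dict to the VIFOpenVSwitch object logged above and calls os_vif.plug() with the instance identity. A sketch of that call with the field values from the log (the InstanceInfo name is an assumption, and plugging needs the same privileges the agent runs with):

    # The os-vif call behind "Successfully plugged vif ..." (a sketch of
    # the public os-vif API; nova builds these objects in os_vif_util).
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id="7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29")
    my_vif = vif.VIFOpenVSwitch(
        id="7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29",
        address="fa:16:3e:32:b5:a7",
        vif_name="tap7c6f02be-9e",
        bridge_name="br-int",
        has_traffic_filtering=True,
        preserve_on_delete=True,
        network=network.Network(id="8c705942-f176-43ea-a8ac-b9d641f70c3f",
                                bridge="br-int"),
        port_profile=profile,
    )
    inst = instance_info.InstanceInfo(
        uuid="55d1b5fb-6799-4992-861f-d00e3165cf1b",
        name="instance-00000009")  # name is an assumption
    os_vif.plug(my_vif, inst)
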
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.139 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.140 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.140 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:32:b5:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.141 2 INFO nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Using config drive#033[00m
Oct 12 17:31:34 np0005481680 nova_compute[264665]: 2025-10-12 21:31:34.179 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:31:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.085 2 INFO nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Creating config drive at /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.094 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rtz9hgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.225 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rtz9hgt" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.269 2 DEBUG nova.storage.rbd_utils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.274 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.495 2 DEBUG oslo_concurrency.processutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config 55d1b5fb-6799-4992-861f-d00e3165cf1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.497 2 INFO nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Deleting local config drive /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b/disk.config because it was imported into RBD.#033[00m
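Having built the ISO locally, Nova imports it into the Ceph "vms" pool as <uuid>_disk.config and then deletes the local copy, per the three entries above. A one-call sketch of the logged import, with the instance UUID replaced by a placeholder:

    # Sketch of the logged "rbd import"; <uuid> stands in for the instance id.
    import subprocess

    uuid = "<uuid>"
    subprocess.run(
        ["rbd", "import",
         "--pool", "vms",
         f"/var/lib/nova/instances/{uuid}/disk.config",  # local ISO source
         f"{uuid}_disk.config",                          # destination RBD image
         "--image-format=2",
         "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)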
Oct 12 17:31:35 np0005481680 systemd[1]: Starting libvirt secret daemon...
Oct 12 17:31:35 np0005481680 systemd[1]: Started libvirt secret daemon.
Oct 12 17:31:35 np0005481680 kernel: tap7c6f02be-9e: entered promiscuous mode
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.6302] manager: (tap7c6f02be-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:35Z|00069|binding|INFO|Claiming lport 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 for this chassis.
Oct 12 17:31:35 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:35Z|00070|binding|INFO|7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29: Claiming fa:16:3e:32:b5:a7 10.100.0.10
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.6608] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.6618] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.668 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:b5:a7 10.100.0.10'], port_security=['fa:16:3e:32:b5:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-803245783', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '55d1b5fb-6799-4992-861f-d00e3165cf1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-803245783', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '7', 'neutron:security_group_ids': '45c1af83-66cf-4f12-b9f3-589fae4453b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07777841-6990-488a-9c43-2ff73eafd022, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.670 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 in datapath 8c705942-f176-43ea-a8ac-b9d641f70c3f bound to our chassis#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.672 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c705942-f176-43ea-a8ac-b9d641f70c3f#033[00m
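The metadata agent learns about the chassis binding through an OVSDB row event on Port_Binding, which is what the "Matched UPDATE: PortBindingUpdatedEvent" entry shows. A minimal watcher in the same style, assuming ovsdbapp's RowEvent interface as it appears in the event repr above (events tuple, table name, conditions); the matching logic here is illustrative, not neutron's actual handler:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        # Mirrors the repr above: events=('update',), table='Port_Binding'.
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old=Port_Binding(chassis=[]) in the log means the port has just
            # been claimed; this is where metadata provisioning would start.
            if getattr(old, 'chassis', None) == [] and row.chassis:
                print('port %s bound to our chassis' % row.logical_port)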
Oct 12 17:31:35 np0005481680 systemd-machined[218338]: New machine qemu-4-instance-00000009.
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.691 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[aaffda02-a1b5-4d28-81f7-fb600e0e7a9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.691 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c705942-f1 in ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.694 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c705942-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.694 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[42dbf64e-d348-47a8-a917-f8f798ecb291]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.695 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[55b0fb8f-3a5a-44ff-84c3-136a7249f743]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.712 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e4aea6-bf64-4dc7-90cc-f27b7c6d9907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 systemd-udevd[282785]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.745 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b307ef-705e-4a28-aa91-4a5828652c8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
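Provisioning the datapath means creating a veth pair whose inner end lives in the ovnmeta- namespace, which is what the privsep calls above are doing through neutron's ip_lib. A sketch with pyroute2 (the library underneath ip_lib); the interface and namespace names are copied from the log purely for illustration:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f'
    netns.create(ns)          # create the namespace (raises if it exists)
    ipr = IPRoute()
    # outer end stays in the root namespace, the peer is created inside ns
    ipr.link('add', ifname='tap8c705942-f0', kind='veth',
             peer={'ifname': 'tap8c705942-f1', 'net_ns_fd': ns})
    idx = ipr.link_lookup(ifname='tap8c705942-f0')[0]
    ipr.link('set', index=idx, state='up')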
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.7532] device (tap7c6f02be-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.7555] device (tap7c6f02be-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:35Z|00071|binding|INFO|Setting lport 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 ovn-installed in OVS
Oct 12 17:31:35 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:35Z|00072|binding|INFO|Setting lport 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 up in Southbound
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.787 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[8ceaf656-2ece-4a2b-a104-8cb54b877b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.795 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cacff2-0801-4603-934f-cc2ba4ba4db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.7974] manager: (tap8c705942-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.855 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[f8dfe728-0353-49a3-845d-e70604511436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.861 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[cc24c5b9-3a99-49f3-a788-8d760ba74553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.890 2 DEBUG nova.network.neutron [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Updated VIF entry in instance network info cache for port 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.891 2 DEBUG nova.network.neutron [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Updating instance_info_cache with network_info: [{"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:31:35 np0005481680 NetworkManager[44859]: <info>  [1760304695.8937] device (tap8c705942-f0): carrier: link connected
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.903 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[08846290-7989-4060-b2de-e36b9ae26a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 nova_compute[264665]: 2025-10-12 21:31:35.916 2 DEBUG oslo_concurrency.lockutils [req-de803631-aa53-4df1-ac07-85bc130de69b req-aff77fbd-8df2-4c5c-b6ac-272b6274f9af 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-55d1b5fb-6799-4992-861f-d00e3165cf1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.926 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[500f01fd-831d-4c7c-bee8-637a6c188b47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c705942-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:76:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436252, 'reachable_time': 40944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282816, 'error': None, 'target': 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.944 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[efcc7025-7bed-412f-9026-b2cc51f04988]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:7653'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436252, 'tstamp': 436252}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282817, 'error': None, 'target': 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:35 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:35.966 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[458a01c2-9ca7-4817-9679-ea63152f36d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c705942-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:76:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436252, 'reachable_time': 40944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282818, 'error': None, 'target': 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.003 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a4cddc-1d9e-43a5-9d65-4fbef94588a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.040 2 DEBUG nova.compute.manager [req-31a50751-4376-4e73-b2ff-c2d929a45360 req-a4263d4e-a139-43cf-a3dd-43d0e0ec9fda 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.040 2 DEBUG oslo_concurrency.lockutils [req-31a50751-4376-4e73-b2ff-c2d929a45360 req-a4263d4e-a139-43cf-a3dd-43d0e0ec9fda 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.041 2 DEBUG oslo_concurrency.lockutils [req-31a50751-4376-4e73-b2ff-c2d929a45360 req-a4263d4e-a139-43cf-a3dd-43d0e0ec9fda 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.041 2 DEBUG oslo_concurrency.lockutils [req-31a50751-4376-4e73-b2ff-c2d929a45360 req-a4263d4e-a139-43cf-a3dd-43d0e0ec9fda 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.041 2 DEBUG nova.compute.manager [req-31a50751-4376-4e73-b2ff-c2d929a45360 req-a4263d4e-a139-43cf-a3dd-43d0e0ec9fda 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Processing event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
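The lock/pop choreography above is Nova's external-event plumbing: the spawning thread registers a waiter for network-vif-plugged, and the Neutron-triggered API call pops and fires it. A toy version of that wait/pop pattern, not Nova's actual code; the UUID and timeout are illustrative:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Toy waiter registry in the spirit of the log above."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)      # uuid -> event name -> Event

        def prepare(self, uuid, name):
            with self._lock:                      # "Acquiring lock ...-events"
                return self._events[uuid].setdefault(name, threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                ev = self._events[uuid].pop(name, None)
            if ev:
                ev.set()                          # wake the waiting spawn
            return ev

    reg = InstanceEvents()
    waiter = reg.prepare('55d1b5fb', 'network-vif-plugged')
    reg.pop('55d1b5fb', 'network-vif-plugged')    # what the event handler does
    waiter.wait(timeout=300)                      # spawn side, bounded wait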
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.079 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[3f280162-df12-4994-86d3-3c006f5ec9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.081 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c705942-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.081 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.082 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c705942-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:36 np0005481680 kernel: tap8c705942-f0: entered promiscuous mode
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:36 np0005481680 NetworkManager[44859]: <info>  [1760304696.0864] manager: (tap8c705942-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.093 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c705942-f0, col_values=(('external_ids', {'iface-id': '7b02f269-b689-4346-bd68-c03fd1ee0e56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
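The DbSetCommand above is the handoff to OVN: writing external_ids:iface-id onto the OVS interface is how ovn-controller matches it to a logical port and claims or releases it, as the next line shows. The same write done by hand via the CLI; the interface and iface-id values are taken from the log for illustration only:

    import subprocess

    # Equivalent of the logged DbSetCommand, via ovs-vsctl.
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", "tap8c705942-f0",
         "external_ids:iface-id=7b02f269-b689-4346-bd68-c03fd1ee0e56"],
        check=True)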
Oct 12 17:31:36 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:36Z|00073|binding|INFO|Releasing lport 7b02f269-b689-4346-bd68-c03fd1ee0e56 from this chassis (sb_readonly=0)
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.099 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c705942-f176-43ea-a8ac-b9d641f70c3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c705942-f176-43ea-a8ac-b9d641f70c3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.100 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[28aef8f1-f7f0-422f-8e59-51dd020a9aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.101 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-8c705942-f176-43ea-a8ac-b9d641f70c3f
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/8c705942-f176-43ea-a8ac-b9d641f70c3f.pid.haproxy
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID 8c705942-f176-43ea-a8ac-b9d641f70c3f
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:31:36 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:36.102 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'env', 'PROCESS_TAG=haproxy-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c705942-f176-43ea-a8ac-b9d641f70c3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
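The rendered config above binds 169.254.169.254:80 and tags each request with X-OVN-Network-ID before handing it to the agent; running haproxy inside the ovnmeta namespace is what scopes that bind to the one network. A bare-bones sketch of the rootwrap launch the agent performs:

    import subprocess

    ns = "ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "8c705942-f176-43ea-a8ac-b9d641f70c3f.conf")
    # haproxy backgrounds itself ("daemon" directive in the config above)
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", cfg], check=True)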
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:36 np0005481680 podman[282892]: 2025-10-12 21:31:36.540836484 +0000 UTC m=+0.039914031 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.778 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304696.777545, 55d1b5fb-6799-4992-861f-d00e3165cf1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.778 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] VM Started (Lifecycle Event)#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.784 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.789 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.795 2 INFO nova.virt.libvirt.driver [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Instance spawned successfully.#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.796 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.802 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:31:36 np0005481680 podman[282892]: 2025-10-12 21:31:36.812190816 +0000 UTC m=+0.311268313 container create 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.823 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.833 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.833 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.834 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.834 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.835 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.836 2 DEBUG nova.virt.libvirt.driver [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.844 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.845 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304696.7777045, 55d1b5fb-6799-4992-861f-d00e3165cf1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.846 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.876 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:31:36 np0005481680 systemd[1]: Started libpod-conmon-5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf.scope.
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.886 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304696.7879956, 55d1b5fb-6799-4992-861f-d00e3165cf1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.886 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.904 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.908 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:31:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.913 2 INFO nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Took 8.68 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.914 2 DEBUG nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:31:36 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/030a8dd48c90770e3fc01864a29b90e03fcf691c446b3fc8139703beca6ee2b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:31:36 np0005481680 podman[282892]: 2025-10-12 21:31:36.939693173 +0000 UTC m=+0.438770710 container init 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 12 17:31:36 np0005481680 podman[282892]: 2025-10-12 21:31:36.952828848 +0000 UTC m=+0.451906345 container start 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:31:36 np0005481680 nova_compute[264665]: 2025-10-12 21:31:36.968 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
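The sync_power_state handler reconciles the DB's power state with what libvirt reports, but defers whenever another task owns the instance, hence the "pending task (spawning). Skip." lines above. The decision, distilled into a sketch (state constants and return strings are illustrative, not Nova's code):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. 'spawning': another thread owns the instance, do nothing
            return "skip: pending task %s" % task_state
        if db_power_state != vm_power_state:
            return "update DB %s -> %s" % (db_power_state, vm_power_state)
        return "in sync"

    # The logged case: DB power_state 0, VM power_state 1, task spawning.
    print(sync_power_state(NOSTATE, RUNNING, "spawning"))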
Oct 12 17:31:36 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [NOTICE]   (282912) : New worker (282914) forked
Oct 12 17:31:36 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [NOTICE]   (282912) : Loading success.
Oct 12 17:31:37 np0005481680 nova_compute[264665]: 2025-10-12 21:31:37.008 2 INFO nova.compute.manager [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Took 9.67 seconds to build instance.#033[00m
Oct 12 17:31:37 np0005481680 nova_compute[264665]: 2025-10-12 21:31:37.027 2 DEBUG oslo_concurrency.lockutils [None req-e9585b56-138a-45ef-8de3-bf01c6674870 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:37.241Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:31:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:37.241Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:31:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:37.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
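Both dashboard receivers on port 8443 are unreachable, so Alertmanager retries and then abandons the notification once its per-notification context deadline lapses. The same retry-until-deadline shape in Python, assuming the requests library; the URL, deadline, and attempt count are placeholders:

    import time
    import requests

    def notify(url, payload, deadline=10.0, attempts=3):
        # Retry a webhook POST until attempts or an overall deadline run out,
        # like the "retry canceled ... context deadline exceeded" lines above.
        start = time.monotonic()
        for attempt in range(1, attempts + 1):
            remaining = deadline - (time.monotonic() - start)
            if remaining <= 0:
                raise TimeoutError("deadline exceeded after %d attempts" % (attempt - 1))
            try:
                return requests.post(url, json=payload, timeout=remaining)
            except requests.RequestException:
                time.sleep(min(2 ** attempt, max(remaining, 0)))  # capped backoff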
Oct 12 17:31:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:31:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:37.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.142 2 DEBUG nova.compute.manager [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.143 2 DEBUG oslo_concurrency.lockutils [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.143 2 DEBUG oslo_concurrency.lockutils [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.143 2 DEBUG oslo_concurrency.lockutils [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.144 2 DEBUG nova.compute.manager [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] No waiting events found dispatching network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.144 2 WARNING nova.compute.manager [req-8b7ca5ea-7b20-4902-98e0-a2552a86f75b req-18162c38-71f1-4d1d-bdc8-5f007231847e 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received unexpected event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 for instance with vm_state active and task_state None.#033[00m
Oct 12 17:31:38 np0005481680 nova_compute[264665]: 2025-10-12 21:31:38.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:39 np0005481680 nova_compute[264665]: 2025-10-12 21:31:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:31:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:39.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:31:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 12 17:31:39 np0005481680 nova_compute[264665]: 2025-10-12 21:31:39.658 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:39 np0005481680 nova_compute[264665]: 2025-10-12 21:31:39.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:39 np0005481680 nova_compute[264665]: 2025-10-12 21:31:39.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:39 np0005481680 nova_compute[264665]: 2025-10-12 21:31:39.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:31:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.145 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.146 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.146 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.147 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.147 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.149 2 INFO nova.compute.manager [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Terminating instance#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.151 2 DEBUG nova.compute.manager [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:31:40 np0005481680 kernel: tap7c6f02be-9e (unregistering): left promiscuous mode
Oct 12 17:31:40 np0005481680 NetworkManager[44859]: <info>  [1760304700.1977] device (tap7c6f02be-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:40Z|00074|binding|INFO|Releasing lport 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 from this chassis (sb_readonly=0)
Oct 12 17:31:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:40Z|00075|binding|INFO|Setting lport 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 down in Southbound
Oct 12 17:31:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:31:40Z|00076|binding|INFO|Removing iface tap7c6f02be-9e ovn-installed in OVS
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.215 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:b5:a7 10.100.0.10'], port_security=['fa:16:3e:32:b5:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-803245783', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '55d1b5fb-6799-4992-861f-d00e3165cf1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-803245783', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '9', 'neutron:security_group_ids': '45c1af83-66cf-4f12-b9f3-589fae4453b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07777841-6990-488a-9c43-2ff73eafd022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.216 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 in datapath 8c705942-f176-43ea-a8ac-b9d641f70c3f unbound from our chassis#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.217 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c705942-f176-43ea-a8ac-b9d641f70c3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.218 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1db07b-5c53-49da-a600-e24d5e7b4364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.218 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f namespace which is not needed anymore#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 12 17:31:40 np0005481680 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 4.577s CPU time.
Oct 12 17:31:40 np0005481680 systemd-machined[218338]: Machine qemu-4-instance-00000009 terminated.
Oct 12 17:31:40 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [NOTICE]   (282912) : haproxy version is 2.8.14-c23fe91
Oct 12 17:31:40 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [NOTICE]   (282912) : path to executable is /usr/sbin/haproxy
Oct 12 17:31:40 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [WARNING]  (282912) : Exiting Master process...
Oct 12 17:31:40 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [ALERT]    (282912) : Current worker (282914) exited with code 143 (Terminated)
Oct 12 17:31:40 np0005481680 neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f[282908]: [WARNING]  (282912) : All workers exited. Exiting... (0)
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.396 2 DEBUG nova.compute.manager [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-vif-unplugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.396 2 DEBUG oslo_concurrency.lockutils [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.397 2 DEBUG oslo_concurrency.lockutils [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:40 np0005481680 systemd[1]: libpod-5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf.scope: Deactivated successfully.
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.398 2 DEBUG oslo_concurrency.lockutils [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.398 2 DEBUG nova.compute.manager [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] No waiting events found dispatching network-vif-unplugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.399 2 DEBUG nova.compute.manager [req-6addc427-dd92-4a57-973f-03b7cd25b394 req-4f917aa7-0efe-4685-bd55-41d3c7bd0b0d 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-vif-unplugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.401 2 INFO nova.virt.libvirt.driver [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Instance destroyed successfully.#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.402 2 DEBUG nova.objects.instance [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid 55d1b5fb-6799-4992-861f-d00e3165cf1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:31:40 np0005481680 podman[282952]: 2025-10-12 21:31:40.404367564 +0000 UTC m=+0.070835600 container died 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.418 2 DEBUG nova.virt.libvirt.vif [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:31:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-261482013',display_name='tempest-TestNetworkBasicOps-server-261482013',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-261482013',id=9,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgmDJhuX4mQ2N9lu5QmSZMawvcKI7kxZ/9TUIj6kqdIHbwCKLtDkfTNcJ5VByVsnkshb8S0pbpn1a5UUDSYM+40pI/2xI7OtIC5Mb47EJG7C7iZQp6YqUAPrPnU/Gfy2w==',key_name='tempest-TestNetworkBasicOps-1278047717',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:31:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-urses6m6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:31:36Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=55d1b5fb-6799-4992-861f-d00e3165cf1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.419 2 DEBUG nova.network.os_vif_util [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "address": "fa:16:3e:32:b5:a7", "network": {"id": "8c705942-f176-43ea-a8ac-b9d641f70c3f", "bridge": "br-int", "label": "tempest-network-smoke--1430071369", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c6f02be-9e", "ovs_interfaceid": "7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.420 2 DEBUG nova.network.os_vif_util [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.421 2 DEBUG os_vif [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c6f02be-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.434 2 INFO os_vif [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:b5:a7,bridge_name='br-int',has_traffic_filtering=True,id=7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29,network=Network(8c705942-f176-43ea-a8ac-b9d641f70c3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7c6f02be-9e')#033[00m
Oct 12 17:31:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf-userdata-shm.mount: Deactivated successfully.
Oct 12 17:31:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-030a8dd48c90770e3fc01864a29b90e03fcf691c446b3fc8139703beca6ee2b0-merged.mount: Deactivated successfully.
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.478 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 podman[282952]: 2025-10-12 21:31:40.509211942 +0000 UTC m=+0.175679978 container cleanup 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 12 17:31:40 np0005481680 systemd[1]: libpod-conmon-5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf.scope: Deactivated successfully.
Oct 12 17:31:40 np0005481680 podman[283009]: 2025-10-12 21:31:40.614623655 +0000 UTC m=+0.069422654 container remove 5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.624 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d9ee26-401c-44ff-a621-e25c07214d54]: (4, ('Sun Oct 12 09:31:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f (5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf)\n5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf\nSun Oct 12 09:31:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f (5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf)\n5dd7c7d43e0ce090ddb51d6a0ebd06b5622868e318425800ec70acbbd87f6bdf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.626 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[06ec08ca-0c74-4f40-b7ea-782590ea8b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.627 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c705942-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 kernel: tap8c705942-f0: left promiscuous mode
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.654 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6a1822-d8da-493b-bddf-cf40d4b3d9aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.679 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[6568c593-73da-4d9f-8a21-0c289df32478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.681 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[16e49681-e1ae-495e-9cb0-ec6c4e481519]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.693 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.693 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.694 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.694 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:31:40 np0005481680 nova_compute[264665]: 2025-10-12 21:31:40.694 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.708 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[6423a6bf-a429-4352-a5d1-c6ca4a021236]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436241, 'reachable_time': 38038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283025, 'error': None, 'target': 'ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 systemd[1]: run-netns-ovnmeta\x2d8c705942\x2df176\x2d43ea\x2da8ac\x2db9d641f70c3f.mount: Deactivated successfully.
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.715 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c705942-f176-43ea-a8ac-b9d641f70c3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.715 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[a5833010-7cfe-4392-93bb-5580851eca33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:31:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:40.717 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.014 2 INFO nova.virt.libvirt.driver [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Deleting instance files /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b_del#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.016 2 INFO nova.virt.libvirt.driver [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Deletion of /var/lib/nova/instances/55d1b5fb-6799-4992-861f-d00e3165cf1b_del complete#033[00m
Oct 12 17:31:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:41.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.118 2 INFO nova.compute.manager [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.119 2 DEBUG oslo.service.loopingcall [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.120 2 DEBUG nova.compute.manager [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.120 2 DEBUG nova.network.neutron [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 12 17:31:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:31:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185387931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.226 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:31:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 846 KiB/s wr, 85 op/s
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.483 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.485 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4570MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.486 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.486 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.743 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance 55d1b5fb-6799-4992-861f-d00e3165cf1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.745 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.745 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:31:41 np0005481680 nova_compute[264665]: 2025-10-12 21:31:41.789 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:42] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:31:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:42] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:31:42 np0005481680 podman[283072]: 2025-10-12 21:31:42.156650815 +0000 UTC m=+0.106336357 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:31:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:31:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999572523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.325 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.333 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.354 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.383 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.383 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.470 2 DEBUG nova.compute.manager [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.471 2 DEBUG oslo_concurrency.lockutils [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.471 2 DEBUG oslo_concurrency.lockutils [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.472 2 DEBUG oslo_concurrency.lockutils [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.472 2 DEBUG nova.compute.manager [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] No waiting events found dispatching network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:31:42 np0005481680 nova_compute[264665]: 2025-10-12 21:31:42.473 2 WARNING nova.compute.manager [req-a4e1af72-745f-4550-8ee6-9482a85d0393 req-6ebdedbc-da4a-4bc3-ad7b-2fca9c41f040 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Received unexpected event network-vif-plugged-7c6f02be-9e9b-4dc6-b7e4-e5625c03ba29 for instance with vm_state active and task_state deleting.#033[00m
Oct 12 17:31:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:43.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.229 2 DEBUG nova.network.neutron [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.267 2 INFO nova.compute.manager [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Took 2.15 seconds to deallocate network for instance.#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.326 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.328 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:31:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 847 KiB/s wr, 85 op/s
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.384 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.384 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.386 2 DEBUG oslo_concurrency.processutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:31:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:31:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849633693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.860 2 DEBUG oslo_concurrency.processutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.868 2 DEBUG nova.compute.provider_tree [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.900 2 DEBUG nova.scheduler.client.report [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.923 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:31:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:43 np0005481680 nova_compute[264665]: 2025-10-12 21:31:43.996 2 INFO nova.scheduler.client.report [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance 55d1b5fb-6799-4992-861f-d00e3165cf1b
Oct 12 17:31:44 np0005481680 nova_compute[264665]: 2025-10-12 21:31:44.075 2 DEBUG oslo_concurrency.lockutils [None req-9766d6c6-072a-4653-8089-878f56d337b8 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "55d1b5fb-6799-4992-861f-d00e3165cf1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:31:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 111 op/s
Oct 12 17:31:45 np0005481680 nova_compute[264665]: 2025-10-12 21:31:45.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:45 np0005481680 nova_compute[264665]: 2025-10-12 21:31:45.692 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:31:45 np0005481680 nova_compute[264665]: 2025-10-12 21:31:45.693 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:31:45 np0005481680 nova_compute[264665]: 2025-10-12 21:31:45.693 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:31:45 np0005481680 nova_compute[264665]: 2025-10-12 21:31:45.708 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
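_heal_instance_info_cache is an oslo.service periodic task; the three entries above are one idle pass (the instance was just terminated, so there is nothing to heal). A minimal sketch of how such a task is declared, assuming oslo.service is installed; the 60 s spacing is an illustrative value, not the interval configured on this node:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Real wiring passes an oslo.config ConfigOpts to __init__ and runs
        # run_periodic_tasks() on a timer; this sketch only shows the shape.
        @periodic_task.periodic_task(spacing=60)  # seconds between runs (assumed)
        def _heal_instance_info_cache(self, context):
            print("heal pass: nothing to do")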
Oct 12 17:31:45 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:31:45.719 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 12 17:31:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:46 np0005481680 nova_compute[264665]: 2025-10-12 21:31:46.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:47.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:47.247Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:31:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:47.247Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:31:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:47.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
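Alertmanager on this node cannot deliver dashboard notifications: both configured webhooks (compute-1 and compute-2, port 8443) time out, and the retry budget of 2 attempts is exhausted. The endpoint it posts to is the Ceph dashboard's /api/prometheus_receiver. Purely as an illustration of what would satisfy the delivery, a hypothetical stand-in receiver (the real one is the mgr dashboard module and serves HTTPS; port and plain HTTP here are assumptions):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                alerts = json.loads(body).get("alerts", [])  # Alertmanager webhook payload
                print(f"received {len(alerts)} alert(s)")
                self.send_response(200)
                self.end_headers()  # any 2xx stops Alertmanager's retries
            else:
                self.send_response(404)
                self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()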
Oct 12 17:31:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 12 17:31:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:47.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:31:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:31:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
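The mon audit entries above show the uniform shape of Ceph management traffic: every query arrives as a JSON mon_command ({"prefix": ..., "format": "json"}) dispatched on behalf of an authenticated entity (client.openstack for the nova calls, mgr.compute-0.fmjeht for the mgr's own blocklist poll). The same blocklist query can be reproduced from the CLI; a sketch using subprocess, assuming the ceph CLI and a readable keyring on the host:

    import json
    import subprocess

    # Mirrors the audited command: {"prefix": "osd blocklist ls", "format": "json"}
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(f"{len(entries)} blocklist entries")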
Oct 12 17:31:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:49.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:49 np0005481680 podman[283150]: 2025-10-12 21:31:49.138413836 +0000 UTC m=+0.092432612 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 12 17:31:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 12 17:31:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:50 np0005481680 nova_compute[264665]: 2025-10-12 21:31:50.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:50 np0005481680 nova_compute[264665]: 2025-10-12 21:31:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:50 np0005481680 nova_compute[264665]: 2025-10-12 21:31:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:51 np0005481680 nova_compute[264665]: 2025-10-12 21:31:51.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 12 17:31:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:31:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:51.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:31:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:52] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:31:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:31:52] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
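Prometheus at 192.168.122.100 scrapes the mgr prometheus module roughly every 10 s; each scrape appears twice above (once from the container stdout, once via the mgr's cherrypy access log) and returns about 48 KB of text exposition. A sketch that fetches and filters the same endpoint, assuming the module listens on its default port 9283 (the port is not shown in the log):

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        text = r.read().decode()
    # Text exposition format: "name{labels} value" lines; '#' lines are metadata.
    samples = [line for line in text.splitlines() if line and not line.startswith("#")]
    print(f"{len(samples)} samples scraped")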
Oct 12 17:31:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:53.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 12 17:31:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:53.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:55.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:31:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 12 17:31:55 np0005481680 nova_compute[264665]: 2025-10-12 21:31:55.394 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304700.3911977, 55d1b5fb-6799-4992-861f-d00e3165cf1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 12 17:31:55 np0005481680 nova_compute[264665]: 2025-10-12 21:31:55.395 2 INFO nova.compute.manager [-] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] VM Stopped (Lifecycle Event)
Oct 12 17:31:55 np0005481680 nova_compute[264665]: 2025-10-12 21:31:55.417 2 DEBUG nova.compute.manager [None req-a65a609f-b7e8-4c77-a906-39b948a4ea00 - - - - - -] [instance: 55d1b5fb-6799-4992-861f-d00e3165cf1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 12 17:31:55 np0005481680 nova_compute[264665]: 2025-10-12 21:31:55.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:56 np0005481680 nova_compute[264665]: 2025-10-12 21:31:56.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:31:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:31:57.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:31:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:31:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:31:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:31:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:31:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:31:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:31:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:31:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:31:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:00 np0005481680 nova_compute[264665]: 2025-10-12 21:32:00.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:01.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:01 np0005481680 nova_compute[264665]: 2025-10-12 21:32:01.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:01.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:02] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:32:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:02] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:32:02 np0005481680 podman[283186]: 2025-10-12 21:32:02.174391767 +0000 UTC m=+0.128872242 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:32:02 np0005481680 podman[283187]: 2025-10-12 21:32:02.206483127 +0000 UTC m=+0.155063052 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
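The two podman entries above are periodic healthcheck transcripts: each container's configured test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/<name>) ran and reported health_status=healthy with a zero failing streak. The same check can be driven by hand; a sketch using the container names from the log (assumes a podman new enough to support healthchecks):

    import subprocess

    for name in ("iscsid", "ovn_controller", "ovn_metadata_agent"):
        # `podman healthcheck run` executes the container's configured test
        # and exits 0 on healthy, non-zero on unhealthy.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")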
Oct 12 17:32:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:03.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:32:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:32:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.870 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.871 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.893 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.994 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:32:05 np0005481680 nova_compute[264665]: 2025-10-12 21:32:05.994 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:32:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.008 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.009 2 INFO nova.compute.claims [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Claim successful on node compute-0.ctlplane.example.com
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.162 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:06 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:32:06 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172142495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.626 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.635 2 DEBUG nova.compute.provider_tree [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.670 2 DEBUG nova.scheduler.client.report [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.694 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
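Note that the claim path shells out to ceph df --format=json --id openstack: with an RBD image backend, DISK_GB is sized from the pool stats rather than the local filesystem. A sketch of reading the same JSON (field names per the standard ceph df schema; the vms pool name is taken from the rbd import later in this build):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)
    total_gib = stats["stats"]["total_bytes"] / 2**30
    avail_gib = stats["stats"]["total_avail_bytes"] / 2**30
    print(f"cluster: {avail_gib:.0f} GiB free of {total_gib:.0f} GiB")
    for pool in stats["pools"]:
        if pool["name"] == "vms":
            print("vms pool stats:", pool["stats"])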
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.695 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.752 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.752 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.780 2 INFO nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.814 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.957 2 DEBUG nova.policy [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.963 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.965 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 12 17:32:06 np0005481680 nova_compute[264665]: 2025-10-12 21:32:06.966 2 INFO nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Creating image(s)
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.008 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.058 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.103 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.110 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:32:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:07.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.175 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
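qemu-img info is wrapped in oslo's prlimit helper so that probing an untrusted image cannot exhaust the host: 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30). A sketch of the same guard using only the standard library, assuming a Linux host; the base-image path is the hashed cache file from the log:

    import json
    import resource
    import subprocess

    def cap_resources():
        # Same caps the logged command applies: --as in bytes, --cpu in seconds.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"],
        capture_output=True, text=True, check=True, preexec_fn=cap_resources,
    ).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])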
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.176 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.177 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.178 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.220 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.226 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d f03fc7b2-b000-4972-b1ba-904366ff4d34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:32:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:07.248Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:07.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.633 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d f03fc7b2-b000-4972-b1ba-904366ff4d34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.749 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
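With the RBD backend, "Creating image(s)" therefore means importing the cached base file into the vms pool and then growing it to the flavor's root-disk size (1073741824 bytes = 1 GiB here). The equivalent two steps from the CLI, as a sketch with the image and pool names copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"
    disk = "f03fc7b2-b000-4972-b1ba-904366ff4d34_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: upload the base image as a format-2 RBD image (the logged command).
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *common], check=True)
    # Step 2: grow it to the flavor root size. RBD resize only updates
    # metadata (objects are allocated lazily), so this is cheap. Size
    # suffixes like "1G" assume a reasonably recent rbd CLI.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1G", disk,
                    *common], check=True)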
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.914 2 DEBUG nova.objects.instance [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid f03fc7b2-b000-4972-b1ba-904366ff4d34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.939 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.940 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Ensure instance console log exists: /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.941 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.941 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:32:07 np0005481680 nova_compute[264665]: 2025-10-12 21:32:07.942 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:32:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:08.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:08 np0005481680 nova_compute[264665]: 2025-10-12 21:32:08.103 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Successfully created port: a272c540-5cec-4898-bfe5-aba42a319411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 12 17:32:08 np0005481680 ceph-mgr[73901]: [dashboard INFO request] [192.168.122.100:48520] [POST] [200] [0.003s] [4.0B] [41cf7df0-56cf-4a81-a42e-d07387dda449] /api/prometheus_receiver
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.079 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Successfully updated port: a272c540-5cec-4898-bfe5-aba42a319411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.094 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.094 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.095 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 12 17:32:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.175 2 DEBUG nova.compute.manager [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-changed-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.175 2 DEBUG nova.compute.manager [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing instance network info cache due to event network-changed-a272c540-5cec-4898-bfe5-aba42a319411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.176 2 DEBUG oslo_concurrency.lockutils [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:32:09 np0005481680 nova_compute[264665]: 2025-10-12 21:32:09.232 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 12 17:32:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:32:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:10.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.394 2 DEBUG nova.network.neutron [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updating instance_info_cache with network_info: [{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.411 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.412 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Instance network_info: |[{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.413 2 DEBUG oslo_concurrency.lockutils [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.413 2 DEBUG nova.network.neutron [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing network info cache for port a272c540-5cec-4898-bfe5-aba42a319411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.418 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Start _get_guest_xml network_info=[{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.425 2 WARNING nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.430 2 DEBUG nova.virt.libvirt.host [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.431 2 DEBUG nova.virt.libvirt.host [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.439 2 DEBUG nova.virt.libvirt.host [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.439 2 DEBUG nova.virt.libvirt.host [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
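Annotation: the four host.py lines above probe for a usable CPU controller, first under cgroups v1 (missing) and then v2 (found on this host). A minimal sketch of such a probe, assuming the standard kernel sysfs layout; the helper names are hypothetical, not Nova's implementation:

    from pathlib import Path

    def has_cgroupsv1_cpu_controller() -> bool:
        # cgroup v1 mounts one directory per controller under /sys/fs/cgroup/.
        return Path("/sys/fs/cgroup/cpu").is_dir()

    def has_cgroupsv2_cpu_controller() -> bool:
        # cgroup v2 lists every enabled controller in a single file.
        f = Path("/sys/fs/cgroup/cgroup.controllers")
        return f.is_file() and "cpu" in f.read_text().split()
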
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.441 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.441 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.442 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.443 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.443 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.444 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.444 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.445 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.445 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.446 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.446 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.447 2 DEBUG nova.virt.hardware [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
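Annotation: the hardware.py lines above enumerate CPU topologies for 1 vCPU with no flavor or image constraints, so only 1:1:1 survives. A simplified sketch of that enumeration (factor the vCPU count into sockets x cores x threads within the logged limits); this mirrors the idea, not Nova's exact code:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    # With the values from the log: 1 vCPU, limits 65536:65536:65536.
    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]
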
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.451 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:32:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076631336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.936 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
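Annotation: processutils.execute is shelling out to the Ceph CLI here. A hedged reconstruction of the same call with the standard library; the --id/--conf arguments are copied from the log, and the monitor map it returns supplies the RBD <host> entries seen in the domain XML further down:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mon_map = json.loads(out)
    print([m["name"] for m in mon_map.get("mons", [])])
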
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.975 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:32:10 np0005481680 nova_compute[264665]: 2025-10-12 21:32:10.982 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:32:11 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:32:11 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123728799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.460 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.462 2 DEBUG nova.virt.libvirt.vif [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-330538359',display_name='tempest-TestNetworkBasicOps-server-330538359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-330538359',id=10,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEADZUwFVMekygIqAVS23ATWsF5c/ODqFeOdQSvml1oe4ZtKGWFL/PNXhSuam4gmYc/NHW88We3OwxB2B/MwQg+FIx20xpFZ9S9n4lg5X4Nc9WgPBdrw4vCWowpc/0tUWA==',key_name='tempest-TestNetworkBasicOps-1938901682',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-e0qpinvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:32:06Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=f03fc7b2-b000-4972-b1ba-904366ff4d34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.463 2 DEBUG nova.network.os_vif_util [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.465 2 DEBUG nova.network.os_vif_util [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
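Annotation: a hedged sketch of building the converted object directly with os_vif's versioned objects; every field value is copied from the log line above, but treating this constructor call as exactly what nova_to_osvif_vif does is an assumption:

    from os_vif.objects import network, vif

    converted = vif.VIFOpenVSwitch(
        id="a272c540-5cec-4898-bfe5-aba42a319411",
        address="fa:16:3e:3d:4e:cf",
        bridge_name="br-int",
        has_traffic_filtering=True,
        plugin="ovs",
        vif_name="tapa272c540-5c",
        preserve_on_delete=False,
        active=False,
        network=network.Network(id="746f9f0d-c12a-426b-a872-a76f216aff44"),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="a272c540-5cec-4898-bfe5-aba42a319411"),
    )
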
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.466 2 DEBUG nova.objects.instance [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid f03fc7b2-b000-4972-b1ba-904366ff4d34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.504 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <uuid>f03fc7b2-b000-4972-b1ba-904366ff4d34</uuid>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <name>instance-0000000a</name>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-330538359</nova:name>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:32:10</nova:creationTime>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <nova:port uuid="a272c540-5cec-4898-bfe5-aba42a319411">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="serial">f03fc7b2-b000-4972-b1ba-904366ff4d34</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="uuid">f03fc7b2-b000-4972-b1ba-904366ff4d34</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/f03fc7b2-b000-4972-b1ba-904366ff4d34_disk">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:3d:4e:cf"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <target dev="tapa272c540-5c"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/console.log" append="off"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:32:11 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:32:11 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:32:11 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:32:11 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
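Annotation: the XML above is the complete guest definition Nova hands to libvirt. A minimal, hypothetical sketch of doing the same by hand with libvirt-python; Nova drives libvirt through its own wrappers, not this literal sequence:

    import libvirt

    with open("instance-0000000a.xml") as f:  # assumed local copy of the XML above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; systemd logs the new machine further down
    conn.close()
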
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.506 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Preparing to wait for external event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.507 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.507 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.508 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
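Annotation: "Preparing to wait for external event" registers a waiter keyed by instance and event name; when Neutron later reports network-vif-plugged through the Nova API, the waiter is signalled and the boot continues. A toy model of that handshake with threading (Nova itself uses eventlet, and 300 s is its default vif_plugging_timeout):

    import threading

    waiters = {}

    def prepare(instance_uuid, event_name):
        return waiters.setdefault((instance_uuid, event_name), threading.Event())

    def deliver(instance_uuid, event_name):
        ev = waiters.pop((instance_uuid, event_name), None)
        if ev is not None:
            ev.set()

    ev = prepare("f03fc7b2-b000-4972-b1ba-904366ff4d34",
                 "network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411")
    # ... plug the VIF and start the guest, then:
    ev.wait(timeout=300)
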
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.509 2 DEBUG nova.virt.libvirt.vif [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-330538359',display_name='tempest-TestNetworkBasicOps-server-330538359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-330538359',id=10,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEADZUwFVMekygIqAVS23ATWsF5c/ODqFeOdQSvml1oe4ZtKGWFL/PNXhSuam4gmYc/NHW88We3OwxB2B/MwQg+FIx20xpFZ9S9n4lg5X4Nc9WgPBdrw4vCWowpc/0tUWA==',key_name='tempest-TestNetworkBasicOps-1938901682',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-e0qpinvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:32:06Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=f03fc7b2-b000-4972-b1ba-904366ff4d34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.510 2 DEBUG nova.network.os_vif_util [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.511 2 DEBUG nova.network.os_vif_util [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.511 2 DEBUG os_vif [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa272c540-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa272c540-5c, col_values=(('external_ids', {'iface-id': 'a272c540-5cec-4898-bfe5-aba42a319411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:4e:cf', 'vm-uuid': 'f03fc7b2-b000-4972-b1ba-904366ff4d34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:11 np0005481680 NetworkManager[44859]: <info>  [1760304731.5232] manager: (tapa272c540-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.533 2 INFO os_vif [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c')#033[00m
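Annotation: the two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are roughly equivalent to the ovs-vsctl calls below, shown here as a sketch; os-vif speaks to ovsdb-server directly rather than shelling out:

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapa272c540-5c",
         "--", "set", "Interface", "tapa272c540-5c",
         "external_ids:iface-id=a272c540-5cec-4898-bfe5-aba42a319411",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:3d:4e:cf",
         "external_ids:vm-uuid=f03fc7b2-b000-4972-b1ba-904366ff4d34"],
        check=True)
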
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.627 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.628 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.628 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:3d:4e:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.629 2 INFO nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Using config drive#033[00m
Oct 12 17:32:11 np0005481680 nova_compute[264665]: 2025-10-12 21:32:11.669 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:32:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:12.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:32:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.314 2 DEBUG nova.network.neutron [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updated VIF entry in instance network info cache for port a272c540-5cec-4898-bfe5-aba42a319411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.315 2 DEBUG nova.network.neutron [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updating instance_info_cache with network_info: [{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.322 2 INFO nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Creating config drive at /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.331 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt5u7aqw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.362 2 DEBUG oslo_concurrency.lockutils [req-de813fef-1830-4fea-9c15-022161ae1c3d req-9ef3ecc3-100a-4f5e-b916-a31bd3eb9fbb 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.474 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvt5u7aqw" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.519 2 DEBUG nova.storage.rbd_utils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.524 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.731 2 DEBUG oslo_concurrency.processutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.732 2 INFO nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Deleting local config drive /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config because it was imported into RBD.#033[00m
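Annotation: a condensed sketch of the config-drive round trip just logged: build the ISO with mkisofs, import it into the vms RBD pool, then delete the local copy. The command lines are copied from the log (including the Nova-generated temp directory), but running them by hand like this is illustrative only:

    import os
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "f03fc7b2-b000-4972-b1ba-904366ff4d34/disk.config")
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpvt5u7aqw"],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "f03fc7b2-b000-4972-b1ba-904366ff4d34_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"
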
Oct 12 17:32:12 np0005481680 kernel: tapa272c540-5c: entered promiscuous mode
Oct 12 17:32:12 np0005481680 NetworkManager[44859]: <info>  [1760304732.7979] manager: (tapa272c540-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct 12 17:32:12 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:12Z|00077|binding|INFO|Claiming lport a272c540-5cec-4898-bfe5-aba42a319411 for this chassis.
Oct 12 17:32:12 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:12Z|00078|binding|INFO|a272c540-5cec-4898-bfe5-aba42a319411: Claiming fa:16:3e:3d:4e:cf 10.100.0.7
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.820 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:4e:cf 10.100.0.7'], port_security=['fa:16:3e:3d:4e:cf 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f03fc7b2-b000-4972-b1ba-904366ff4d34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-746f9f0d-c12a-426b-a872-a76f216aff44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53113194-7690-4bf3-ad5d-7355c514db99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=902c9f4c-9abd-4ab8-b558-68cf7f6fa39e, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=a272c540-5cec-4898-bfe5-aba42a319411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.821 164459 INFO neutron.agent.ovn.metadata.agent [-] Port a272c540-5cec-4898-bfe5-aba42a319411 in datapath 746f9f0d-c12a-426b-a872-a76f216aff44 bound to our chassis#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.823 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 746f9f0d-c12a-426b-a872-a76f216aff44#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.842 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[daeea21e-50e7-4669-a2fc-6928e88361ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.844 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap746f9f0d-c1 in ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
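Annotation: "Creating VETH tap746f9f0d-c1 in ovnmeta-..." corresponds roughly to the iproute2 sequence below. This is a sketch; the agent does this through pyroute2 via neutron.privileged, and which veth end lands in which namespace is inferred from the tap746f9f0d-c0/-c1 names in the surrounding lines:

    import subprocess

    ns = "ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44"
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tap746f9f0d-c0", "type", "veth",
                    "peer", "name", "tap746f9f0d-c1"], check=True)
    # Move one end into the metadata namespace; the other stays in the root
    # namespace to be plugged into OVS.
    subprocess.run(["ip", "link", "set", "tap746f9f0d-c1", "netns", ns],
                   check=True)
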
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.846 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap746f9f0d-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.847 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbe1ec1-d928-4522-a0b1-02820c5c32bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.848 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[802e00aa-4379-4901-8595-a1ecd31f3ba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
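Provisioning creates a veth pair: the outer end (tap746f9f0d-c0) stays in the root namespace to be plugged into OVS, while the inner end (tap746f9f0d-c1) is moved into the ovnmeta- namespace, as the "Creating VETH" line above records. The agent does this through neutron's privsep helpers; a minimal pyroute2 sketch of the same steps (interface and namespace names taken from the log, must run as root, error handling omitted):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44'
    try:
        netns.create(NS)              # the agent creates it if missing
    except OSError:
        pass                          # namespace already exists

    ipr = IPRoute()
    # Create the pair in the root namespace...
    ipr.link('add', ifname='tap746f9f0d-c0', kind='veth',
             peer='tap746f9f0d-c1')
    # ...move the inner end into the metadata namespace...
    inner = ipr.link_lookup(ifname='tap746f9f0d-c1')[0]
    ipr.link('set', index=inner, net_ns_fd=NS)
    # ...and bring the outer end up, ready to be plugged into OVS.
    outer = ipr.link_lookup(ifname='tap746f9f0d-c0')[0]
    ipr.link('set', index=outer, state='up')
    ipr.close()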
Oct 12 17:32:12 np0005481680 systemd[1]: Started Virtual Machine qemu-5-instance-0000000a.
Oct 12 17:32:12 np0005481680 systemd-machined[218338]: New machine qemu-5-instance-0000000a.
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.863 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[96d4bba3-e4a7-486a-a281-7a9b5cb8ff2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 systemd-udevd[283603]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.896 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[86480539-2928-45ff-b651-c82bbb4a6366]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:12 np0005481680 NetworkManager[44859]: <info>  [1760304732.9158] device (tapa272c540-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:32:12 np0005481680 NetworkManager[44859]: <info>  [1760304732.9172] device (tapa272c540-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:32:12 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:12Z|00079|binding|INFO|Setting lport a272c540-5cec-4898-bfe5-aba42a319411 ovn-installed in OVS
Oct 12 17:32:12 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:12Z|00080|binding|INFO|Setting lport a272c540-5cec-4898-bfe5-aba42a319411 up in Southbound
Oct 12 17:32:12 np0005481680 nova_compute[264665]: 2025-10-12 21:32:12.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:12 np0005481680 podman[283586]: 2025-10-12 21:32:12.935730216 +0000 UTC m=+0.100918750 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.948 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7b6105-c624-4129-87bc-54dd00a117e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.954 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[c357afcb-63c0-4ead-96c1-ef0b94c972f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:12 np0005481680 systemd-udevd[283615]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:32:12 np0005481680 NetworkManager[44859]: <info>  [1760304732.9567] manager: (tap746f9f0d-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Oct 12 17:32:12 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.996 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[658ab8ca-daef-40db-95c8-13bb11d83899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:12.999 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[026f414d-1e87-46a0-a9c5-7cd984b08cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 NetworkManager[44859]: <info>  [1760304733.0277] device (tap746f9f0d-c0): carrier: link connected
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.035 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[97707558-a466-4375-ab92-97bd39e6acc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.057 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[250adf85-3478-4dea-8b2d-72b509fa0921]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap746f9f0d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:b6:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439966, 'reachable_time': 35212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283640, 'error': None, 'target': 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.083 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f94897ff-3786-40d7-94ea-29447af3ccc1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:b6e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439966, 'tstamp': 439966}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283641, 'error': None, 'target': 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.110 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[cabe40aa-6c32-462f-b7f2-e768b7f1090d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap746f9f0d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:b6:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439966, 'reachable_time': 35212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283642, 'error': None, 'target': 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.164 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[a5513323-5764-4b05-8825-2f6af12362ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.268 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9c78fc-7b18-43af-935f-d899e3ac6f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.270 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap746f9f0d-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.271 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.272 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap746f9f0d-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:13 np0005481680 kernel: tap746f9f0d-c0: entered promiscuous mode
Oct 12 17:32:13 np0005481680 NetworkManager[44859]: <info>  [1760304733.2781] manager: (tap746f9f0d-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.278 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap746f9f0d-c0, col_values=(('external_ids', {'iface-id': '38ad6a85-0a42-4b1e-9621-16ee7baa8797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
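The transaction trace above shows the plug sequence: remove the stale port from br-ex if present, add it to br-int, and set external_ids:iface-id so ovn-controller can match the interface to its Port_Binding (which it acknowledges just below by releasing the old lport). A minimal ovsdbapp sketch of the same three commands; the local ovsdb-server socket path is an assumption, not from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/var/run/openvswitch/db.sock'   # assumed socket path

    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap746f9f0d-c0', bridge='br-ex',
                             if_exists=True))
        txn.add(api.add_port('br-int', 'tap746f9f0d-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap746f9f0d-c0',
            ('external_ids',
             {'iface-id': '38ad6a85-0a42-4b1e-9621-16ee7baa8797'})))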
Oct 12 17:32:13 np0005481680 nova_compute[264665]: 2025-10-12 21:32:13.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:13 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:13Z|00081|binding|INFO|Releasing lport 38ad6a85-0a42-4b1e-9621-16ee7baa8797 from this chassis (sb_readonly=0)
Oct 12 17:32:13 np0005481680 nova_compute[264665]: 2025-10-12 21:32:13.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.309 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/746f9f0d-c12a-426b-a872-a76f216aff44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/746f9f0d-c12a-426b-a872-a76f216aff44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.311 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[01c32271-8d9c-4e3f-8846-ebc0ea1753dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.312 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-746f9f0d-c12a-426b-a872-a76f216aff44
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/746f9f0d-c12a-426b-a872-a76f216aff44.pid.haproxy
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID 746f9f0d-c12a-426b-a872-a76f216aff44
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:32:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:13.315 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44', 'env', 'PROCESS_TAG=haproxy-746f9f0d-c12a-426b-a872-a76f216aff44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/746f9f0d-c12a-426b-a872-a76f216aff44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
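The generated haproxy instance binds 169.254.169.254:80 inside the ovnmeta- namespace and forwards requests to the agent's /var/lib/neutron/metadata_proxy unix socket, tagging each request with X-OVN-Network-ID so the agent can identify the network. Once the proxy is up (the "Loading success" line below), a guest on that network can fetch its metadata; an illustrative check from inside the instance, assuming the requests package is available in the image:

    import requests

    # 169.254.169.254 is answered by the haproxy started above, which
    # forwards to the metadata agent over the unix socket.
    r = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json', timeout=5)
    r.raise_for_status()
    print(r.json().get('uuid'))   # should be f03fc7b2-b000-4972-b1ba-904366ff4d34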
Oct 12 17:32:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:32:13 np0005481680 podman[283718]: 2025-10-12 21:32:13.830151812 +0000 UTC m=+0.084293354 container create b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 12 17:32:13 np0005481680 podman[283718]: 2025-10-12 21:32:13.784310632 +0000 UTC m=+0.038452254 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:32:13 np0005481680 systemd[1]: Started libpod-conmon-b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4.scope.
Oct 12 17:32:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b1bd0a5bfc017fe5f4917b469ec42f4e7585bcd3b62cce66bce2545d68e51e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:13 np0005481680 podman[283718]: 2025-10-12 21:32:13.959142137 +0000 UTC m=+0.213283759 container init b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:32:13 np0005481680 podman[283718]: 2025-10-12 21:32:13.968960028 +0000 UTC m=+0.223101590 container start b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 12 17:32:14 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [NOTICE]   (283737) : New worker (283739) forked
Oct 12 17:32:14 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [NOTICE]   (283737) : Loading success.
Oct 12 17:32:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:14.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.029 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304734.0290914, f03fc7b2-b000-4972-b1ba-904366ff4d34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.030 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] VM Started (Lifecycle Event)#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.059 2 DEBUG nova.compute.manager [req-b1777648-07c4-4ad4-a9d5-6bf4ad004376 req-6e8f0887-acbb-414d-9316-7e103f42e8f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.060 2 DEBUG oslo_concurrency.lockutils [req-b1777648-07c4-4ad4-a9d5-6bf4ad004376 req-6e8f0887-acbb-414d-9316-7e103f42e8f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.060 2 DEBUG oslo_concurrency.lockutils [req-b1777648-07c4-4ad4-a9d5-6bf4ad004376 req-6e8f0887-acbb-414d-9316-7e103f42e8f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.061 2 DEBUG oslo_concurrency.lockutils [req-b1777648-07c4-4ad4-a9d5-6bf4ad004376 req-6e8f0887-acbb-414d-9316-7e103f42e8f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.061 2 DEBUG nova.compute.manager [req-b1777648-07c4-4ad4-a9d5-6bf4ad004376 req-6e8f0887-acbb-414d-9316-7e103f42e8f4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Processing event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.063 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
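The req-b1777648 lines above are Nova's half of the vif-plug handshake: Neutron, having seen the OVN port go up, posts a network-vif-plugged external event to Nova's API, and nova-compute pops it to wake the spawn thread blocked in wait_for_instance_event. A sketch of the REST call Neutron issues (endpoint and token are placeholders; Neutron actually goes through its internal Nova client):

    import requests

    NOVA_API = 'http://nova-api.example.com:8774/v2.1'   # placeholder
    TOKEN = '...'                                        # placeholder token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': 'f03fc7b2-b000-4972-b1ba-904366ff4d34',
        'tag': 'a272c540-5cec-4898-bfe5-aba42a319411',   # the Neutron port
        'status': 'completed',
    }]}
    resp = requests.post(NOVA_API + '/os-server-external-events',
                         json=body,
                         headers={'X-Auth-Token': TOKEN},
                         timeout=10)
    resp.raise_for_status()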
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.064 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.069 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.071 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.076 2 INFO nova.virt.libvirt.driver [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Instance spawned successfully.#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.078 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.095 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.096 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304734.029196, f03fc7b2-b000-4972-b1ba-904366ff4d34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.096 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.108 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.108 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.109 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.110 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.111 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.111 2 DEBUG nova.virt.libvirt.driver [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.120 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.125 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304734.068347, f03fc7b2-b000-4972-b1ba-904366ff4d34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.125 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.152 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.156 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.170 2 INFO nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.171 2 DEBUG nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.179 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.226 2 INFO nova.compute.manager [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Took 8.26 seconds to build instance.#033[00m
Oct 12 17:32:14 np0005481680 nova_compute[264665]: 2025-10-12 21:32:14.240 2 DEBUG oslo_concurrency.lockutils [None req-eb406f0b-7e43-4d2c-94d9-706db46040ac 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:32:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 12 17:32:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.6 MiB/s wr, 52 op/s
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:32:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
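The mon_command dispatches above ("config generate-minimal-conf", "auth get") are cephadm's periodic reconciliation fetching a minimal client config and keyrings to distribute to managed hosts. The same command can be issued from Python with the librados binding; the conffile path is an assumption for a cephadm host:

    import json

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # assumed path
    cluster.connect()
    cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    if ret == 0:
        print(outbuf.decode())   # minimal ceph.conf for client hosts
    cluster.shutdown()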
Oct 12 17:32:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.154 2 DEBUG nova.compute.manager [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.157 2 DEBUG oslo_concurrency.lockutils [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.158 2 DEBUG oslo_concurrency.lockutils [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.158 2 DEBUG oslo_concurrency.lockutils [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.158 2 DEBUG nova.compute.manager [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] No waiting events found dispatching network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.159 2 WARNING nova.compute.manager [req-712a94e1-7721-4a32-b845-cac5c0f7180a req-62faf3ed-6217-439c-973e-671045e13249 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received unexpected event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 for instance with vm_state active and task_state None.#033[00m
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:32:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:16 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:32:16 np0005481680 nova_compute[264665]: 2025-10-12 21:32:16.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.615295497 +0000 UTC m=+0.064407196 container create e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 17:32:16 np0005481680 systemd[1]: Started libpod-conmon-e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc.scope.
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.586778838 +0000 UTC m=+0.035890587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.7387346 +0000 UTC m=+0.187846289 container init e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.750342306 +0000 UTC m=+0.199454005 container start e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.754568754 +0000 UTC m=+0.203680483 container attach e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:32:16 np0005481680 suspicious_lamport[283939]: 167 167
Oct 12 17:32:16 np0005481680 systemd[1]: libpod-e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc.scope: Deactivated successfully.
Oct 12 17:32:16 np0005481680 podman[283922]: 2025-10-12 21:32:16.75872052 +0000 UTC m=+0.207832219 container died e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:32:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6661b43ae53504f1fedaa9fd0b918d5f55244e622c1c82e6d2420e213cf21f36-merged.mount: Deactivated successfully.
Oct 12 17:32:17 np0005481680 podman[283922]: 2025-10-12 21:32:17.011833586 +0000 UTC m=+0.460945295 container remove e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 17:32:17 np0005481680 systemd[1]: libpod-conmon-e0073bb048dd6ca1eca116338eb873e5d163560bccae027cdbc4ff307922d9dc.scope: Deactivated successfully.
Oct 12 17:32:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:17.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.316827797 +0000 UTC m=+0.102362957 container create ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.25664218 +0000 UTC m=+0.042177400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:17 np0005481680 systemd[1]: Started libpod-conmon-ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec.scope.
Oct 12 17:32:17 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:17 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.460871726 +0000 UTC m=+0.246406906 container init ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.472088752 +0000 UTC m=+0.257623882 container start ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.47940133 +0000 UTC m=+0.264936550 container attach ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:32:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 18 KiB/s wr, 14 op/s
Oct 12 17:32:17 np0005481680 nova_compute[264665]: 2025-10-12 21:32:17.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:17 np0005481680 NetworkManager[44859]: <info>  [1760304737.8450] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 12 17:32:17 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:17Z|00082|binding|INFO|Releasing lport 38ad6a85-0a42-4b1e-9621-16ee7baa8797 from this chassis (sb_readonly=0)
Oct 12 17:32:17 np0005481680 NetworkManager[44859]: <info>  [1760304737.8469] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 12 17:32:17 np0005481680 relaxed_dirac[283983]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:32:17 np0005481680 relaxed_dirac[283983]: --> All data devices are unavailable
Oct 12 17:32:17 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:17Z|00083|binding|INFO|Releasing lport 38ad6a85-0a42-4b1e-9621-16ee7baa8797 from this chassis (sb_readonly=0)
Oct 12 17:32:17 np0005481680 nova_compute[264665]: 2025-10-12 21:32:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:17 np0005481680 systemd[1]: libpod-ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec.scope: Deactivated successfully.
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.903604676 +0000 UTC m=+0.689139846 container died ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:32:17 np0005481680 nova_compute[264665]: 2025-10-12 21:32:17.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e2e20ecd20aaf0b8ba9375225c49537cf108bd0592343c14c2c507ee62294969-merged.mount: Deactivated successfully.
Oct 12 17:32:17 np0005481680 podman[283965]: 2025-10-12 21:32:17.968642297 +0000 UTC m=+0.754177427 container remove ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_dirac, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:32:18 np0005481680 systemd[1]: libpod-conmon-ccf4db8c594c11af87e9979862e1b9fc99210df86cc677162e9898621e2ce9ec.scope: Deactivated successfully.
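The create/init/start/attach/died/remove sequences around containers like suspicious_lamport and relaxed_dirac are cephadm running short-lived helper containers from the pinned ceph image. A sketch of an equivalent one-shot run, assuming the same image digest; the exact arguments cephadm passed are not shown here, and a real ceph-volume call also needs host device and config mounts:

```python
# One-shot helper container, analogous to the short-lived containers above.
# Hypothetical invocation: cephadm's actual command line is not in the log.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
result = subprocess.run(
    ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
     image, "ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True)
print(result.stdout)  # JSON shaped like the naughty_volhard output further below
```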
Oct 12 17:32:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:18.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:18 np0005481680 nova_compute[264665]: 2025-10-12 21:32:18.119 2 DEBUG nova.compute.manager [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-changed-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 12 17:32:18 np0005481680 nova_compute[264665]: 2025-10-12 21:32:18.120 2 DEBUG nova.compute.manager [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing instance network info cache due to event network-changed-a272c540-5cec-4898-bfe5-aba42a319411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 12 17:32:18 np0005481680 nova_compute[264665]: 2025-10-12 21:32:18.121 2 DEBUG oslo_concurrency.lockutils [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:32:18 np0005481680 nova_compute[264665]: 2025-10-12 21:32:18.121 2 DEBUG oslo_concurrency.lockutils [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:32:18 np0005481680 nova_compute[264665]: 2025-10-12 21:32:18.121 2 DEBUG nova.network.neutron [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing network info cache for port a272c540-5cec-4898-bfe5-aba42a319411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:32:18
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', '.nfs']
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:32:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:32:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
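The audit line shows the mgr dispatching "osd blocklist ls" to the monitor as a mon_command. The same call can be issued from any client with python-rados; a sketch, assuming a readable /etc/ceph/ceph.conf and admin keyring:

```python
# Issue the same mon_command the mgr dispatches in the audit line above.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
ret, out, errs = cluster.mon_command(cmd, b"")  # (return code, output, error text)
print(ret, out.decode() or "[]")
cluster.shutdown()
```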
Oct 12 17:32:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:18.369 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:32:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:18.370 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:32:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:18.372 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.829213629 +0000 UTC m=+0.074295028 container create 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:32:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:18.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:32:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
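Both dashboard webhook targets (compute-1 and compute-2 on port 8443) are timing out, so Alertmanager cancels the notification after its retries. A stand-in receiver for testing reachability of that endpoint, assuming plain HTTP as in the logged URLs; this is a debugging sketch, not the dashboard implementation:

```python
# Minimal stand-in for the ceph-dashboard Prometheus receiver that the
# webhook cannot reach above; run on the target host to test :8443.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Alertmanager webhook bodies carry "status" and a list of "alerts".
        print(payload.get("status"), len(payload.get("alerts", [])), "alert(s)")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```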
Oct 12 17:32:18 np0005481680 systemd[1]: Started libpod-conmon-83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689.scope.
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.802538238 +0000 UTC m=+0.047619687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:18 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.9263437 +0000 UTC m=+0.171425139 container init 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.93570342 +0000 UTC m=+0.180784779 container start 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.938894701 +0000 UTC m=+0.183976150 container attach 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:32:18 np0005481680 charming_ganguly[284120]: 167 167
Oct 12 17:32:18 np0005481680 systemd[1]: libpod-83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689.scope: Deactivated successfully.
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.944106884 +0000 UTC m=+0.189188303 container died 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
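The pg target figures above follow from target = usage_ratio * bias * (OSD count * mon_target_pg_per_osd), then quantization to a power of two with a per-pool floor. A worked check, assuming 3 OSDs and the default mon_target_pg_per_osd of 100 (consistent with the 60 GiB cluster built from three ~20 GiB LVs, though neither value appears in these lines):

```python
# Reproduce the pg_autoscaler targets logged above.
# Assumed (not in the log): 3 OSDs x mon_target_pg_per_osd=100 -> budget 300.
pg_budget = 3 * 100

pools = {  # usage ratio and bias, both copied from the log lines
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.00034841348814872695, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (used, bias) in pools.items():
    print(name, used * bias * pg_budget)
# .mgr               ~0.0021557 (logged 0.0021557249951162337, quantized to 1)
# vms                ~0.10452   (logged 0.10452404644461809, quantized to 32)
# cephfs.cephfs.meta ~0.00061   (logged 0.0006104707950771635, quantized to 16)
```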
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:32:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:32:18 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9e03ba99ee5bf0866241ffcada0f2c6bea8ff661cd63817ead730e1e933cafa3-merged.mount: Deactivated successfully.
Oct 12 17:32:18 np0005481680 podman[284104]: 2025-10-12 21:32:18.995936588 +0000 UTC m=+0.241017977 container remove 83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_ganguly, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:32:19 np0005481680 systemd[1]: libpod-conmon-83f892fb3e153d926fd1a3a1f03526569bc6fdde06e6da500a1b312da412c689.scope: Deactivated successfully.
Oct 12 17:32:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.262257571 +0000 UTC m=+0.067428833 container create 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:32:19 np0005481680 systemd[1]: Started libpod-conmon-9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1.scope.
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.229585427 +0000 UTC m=+0.034756749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:19 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92bcaecb45c78393cad6f96e98ebebd198d8e7cc9ff36864f4e8e3b529e26335/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92bcaecb45c78393cad6f96e98ebebd198d8e7cc9ff36864f4e8e3b529e26335/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92bcaecb45c78393cad6f96e98ebebd198d8e7cc9ff36864f4e8e3b529e26335/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:19 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92bcaecb45c78393cad6f96e98ebebd198d8e7cc9ff36864f4e8e3b529e26335/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.357928244 +0000 UTC m=+0.163099526 container init 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.382125063 +0000 UTC m=+0.187296325 container start 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.38752457 +0000 UTC m=+0.192695822 container attach 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:32:19 np0005481680 podman[284161]: 2025-10-12 21:32:19.43445333 +0000 UTC m=+0.115183973 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]: {
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:    "0": [
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:        {
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "devices": [
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "/dev/loop3"
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            ],
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "lv_name": "ceph_lv0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "lv_size": "21470642176",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "name": "ceph_lv0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "tags": {
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.cluster_name": "ceph",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.crush_device_class": "",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.encrypted": "0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.osd_id": "0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.type": "block",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.vdo": "0",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:                "ceph.with_tpm": "0"
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            },
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "type": "block",
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:            "vg_name": "ceph_vg0"
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:        }
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]:    ]
Oct 12 17:32:19 np0005481680 naughty_volhard[284164]: }
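The JSON printed by naughty_volhard looks like `ceph-volume lvm list --format json` output, keyed by OSD id. A parsing sketch, assuming the block above has been saved to a file:

```python
# Map OSD ids to their logical volumes and backing devices from the
# "lvm list"-style JSON above (assumed saved as ceph_volume_lvm_list.json).
import json

with open("ceph_volume_lvm_list.json") as f:
    inventory = json.load(f)

for osd_id, lvs in inventory.items():
    for lv in lvs:
        if lv["type"] == "block":
            devs = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['lv_path']} on {devs} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 47abdfbc-9d1c-416d-8d2d-2f925f341a02)
```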
Oct 12 17:32:19 np0005481680 systemd[1]: libpod-9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1.scope: Deactivated successfully.
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.713655611 +0000 UTC m=+0.518826883 container died 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:32:19 np0005481680 systemd[1]: var-lib-containers-storage-overlay-92bcaecb45c78393cad6f96e98ebebd198d8e7cc9ff36864f4e8e3b529e26335-merged.mount: Deactivated successfully.
Oct 12 17:32:19 np0005481680 podman[284146]: 2025-10-12 21:32:19.781742791 +0000 UTC m=+0.586914053 container remove 9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_volhard, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:32:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 18 KiB/s wr, 105 op/s
Oct 12 17:32:19 np0005481680 systemd[1]: libpod-conmon-9a143fe87ac9c4390751a1949c7ca6c5188a8b7c9b4ce6ba8519a70f95be45a1.scope: Deactivated successfully.
Oct 12 17:32:19 np0005481680 nova_compute[264665]: 2025-10-12 21:32:19.869 2 DEBUG nova.network.neutron [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updated VIF entry in instance network info cache for port a272c540-5cec-4898-bfe5-aba42a319411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 12 17:32:19 np0005481680 nova_compute[264665]: 2025-10-12 21:32:19.871 2 DEBUG nova.network.neutron [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updating instance_info_cache with network_info: [{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:32:19 np0005481680 nova_compute[264665]: 2025-10-12 21:32:19.888 2 DEBUG oslo_concurrency.lockutils [req-350e6558-4787-4005-b791-857ee5b534ce req-209ad123-7547-45e6-b24f-39d1190def28 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
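The instance_info_cache update above embeds the full network_info as JSON. A sketch that pulls out the fixed and floating addresses for each VIF, assuming that list has been saved to a file:

```python
# Extract fixed/floating IPs from the network_info JSON logged above
# (assumed saved as network_info.json).
import json

with open("network_info.json") as f:
    vifs = json.load(f)

for vif in vifs:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floating or "no floating ip")
# a272c540-5cec-4898-bfe5-aba42a319411 10.100.0.7 -> ['192.168.122.233']
```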
Oct 12 17:32:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:32:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:20.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.576216384 +0000 UTC m=+0.073111738 container create b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:32:20 np0005481680 systemd[1]: Started libpod-conmon-b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2.scope.
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.542640177 +0000 UTC m=+0.039535591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.696037906 +0000 UTC m=+0.192933270 container init b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.707107798 +0000 UTC m=+0.204003152 container start b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:32:20 np0005481680 practical_wilbur[284310]: 167 167
Oct 12 17:32:20 np0005481680 systemd[1]: libpod-b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2.scope: Deactivated successfully.
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.722289376 +0000 UTC m=+0.219184750 container attach b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.722797209 +0000 UTC m=+0.219692563 container died b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:32:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-dabd782801ca4558f5bc0a149151fae333519750aee0a63cb04cdd5fe640baf5-merged.mount: Deactivated successfully.
Oct 12 17:32:20 np0005481680 podman[284294]: 2025-10-12 21:32:20.777343682 +0000 UTC m=+0.274239036 container remove b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:32:20 np0005481680 systemd[1]: libpod-conmon-b7d0d2e588aeb3e2183d7af0195a8556d7478d26502e8eedda6cc4fb644a59e2.scope: Deactivated successfully.
Oct 12 17:32:21 np0005481680 podman[284334]: 2025-10-12 21:32:21.047008391 +0000 UTC m=+0.076202478 container create aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:32:21 np0005481680 podman[284334]: 2025-10-12 21:32:21.01292326 +0000 UTC m=+0.042117397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:32:21 np0005481680 systemd[1]: Started libpod-conmon-aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a.scope.
Oct 12 17:32:21 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:32:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988a7067200091f4fe30a0186af3b4c926ddb0a877c5bee26574f6952b64d7b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988a7067200091f4fe30a0186af3b4c926ddb0a877c5bee26574f6952b64d7b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988a7067200091f4fe30a0186af3b4c926ddb0a877c5bee26574f6952b64d7b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:21 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988a7067200091f4fe30a0186af3b4c926ddb0a877c5bee26574f6952b64d7b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:32:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:21 np0005481680 podman[284334]: 2025-10-12 21:32:21.171172272 +0000 UTC m=+0.200366369 container init aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:32:21 np0005481680 podman[284334]: 2025-10-12 21:32:21.185331384 +0000 UTC m=+0.214525461 container start aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:32:21 np0005481680 podman[284334]: 2025-10-12 21:32:21.189585613 +0000 UTC m=+0.218779710 container attach aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 12 17:32:21 np0005481680 nova_compute[264665]: 2025-10-12 21:32:21.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:21 np0005481680 nova_compute[264665]: 2025-10-12 21:32:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:32:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 18 KiB/s wr, 105 op/s
Oct 12 17:32:22 np0005481680 lvm[284428]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:32:22 np0005481680 lvm[284428]: VG ceph_vg0 finished
Oct 12 17:32:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:22] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 12 17:32:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:22] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 12 17:32:22 np0005481680 bold_wozniak[284351]: {}
Oct 12 17:32:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:22.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:22 np0005481680 systemd[1]: libpod-aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a.scope: Deactivated successfully.
Oct 12 17:32:22 np0005481680 systemd[1]: libpod-aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a.scope: Consumed 1.522s CPU time.
Oct 12 17:32:22 np0005481680 podman[284334]: 2025-10-12 21:32:22.079359171 +0000 UTC m=+1.108553248 container died aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:32:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-988a7067200091f4fe30a0186af3b4c926ddb0a877c5bee26574f6952b64d7b2-merged.mount: Deactivated successfully.
Oct 12 17:32:22 np0005481680 podman[284334]: 2025-10-12 21:32:22.151848613 +0000 UTC m=+1.181042690 container remove aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:32:22 np0005481680 systemd[1]: libpod-conmon-aac9321aa60f90e6944104387a34c9a3706fecb88b0b32ea6fde5bbae770ed6a.scope: Deactivated successfully.
Oct 12 17:32:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:32:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:32:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:23 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:32:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 91 op/s
Oct 12 17:32:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:24.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Oct 12 17:32:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:32:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:32:26 np0005481680 nova_compute[264665]: 2025-10-12 21:32:26.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:26 np0005481680 nova_compute[264665]: 2025-10-12 21:32:26.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:27.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:27.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 12 17:32:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:28.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:28 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:28Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:4e:cf 10.100.0.7
Oct 12 17:32:28 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:28Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:4e:cf 10.100.0.7
Oct 12 17:32:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:28.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 12 17:32:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:30.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:31 np0005481680 nova_compute[264665]: 2025-10-12 21:32:31.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:31 np0005481680 nova_compute[264665]: 2025-10-12 21:32:31.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:32:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:32] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 12 17:32:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:32] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct 12 17:32:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:32.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:33 np0005481680 podman[284503]: 2025-10-12 21:32:33.154210007 +0000 UTC m=+0.102656292 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:32:33 np0005481680 podman[284504]: 2025-10-12 21:32:33.176998276 +0000 UTC m=+0.122668451 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 12 17:32:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:33.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:32:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:32:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:32:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:34.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:34 np0005481680 nova_compute[264665]: 2025-10-12 21:32:34.599 2 INFO nova.compute.manager [None req-b141beb3-2877-479c-b175-95fadc491958 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Get console output#033[00m
Oct 12 17:32:34 np0005481680 nova_compute[264665]: 2025-10-12 21:32:34.604 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 12 17:32:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:32:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:36.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:36 np0005481680 nova_compute[264665]: 2025-10-12 21:32:36.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:36 np0005481680 nova_compute[264665]: 2025-10-12 21:32:36.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:36 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:36Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:4e:cf 10.100.0.7
Oct 12 17:32:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:32:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:37.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:32:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:37.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:32:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:38 np0005481680 nova_compute[264665]: 2025-10-12 21:32:38.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:39.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 12 17:32:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:40Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:4e:cf 10.100.0.7
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.658 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.705 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.706 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.706 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.771 2 DEBUG nova.compute.manager [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-changed-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.771 2 DEBUG nova.compute.manager [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing instance network info cache due to event network-changed-a272c540-5cec-4898-bfe5-aba42a319411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.772 2 DEBUG oslo_concurrency.lockutils [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.772 2 DEBUG oslo_concurrency.lockutils [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.773 2 DEBUG nova.network.neutron [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Refreshing network info cache for port a272c540-5cec-4898-bfe5-aba42a319411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.826 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.827 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.827 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.827 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.828 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.830 2 INFO nova.compute.manager [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Terminating instance#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.831 2 DEBUG nova.compute.manager [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:32:40 np0005481680 kernel: tapa272c540-5c (unregistering): left promiscuous mode
Oct 12 17:32:40 np0005481680 NetworkManager[44859]: <info>  [1760304760.9052] device (tapa272c540-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:32:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:40Z|00084|binding|INFO|Releasing lport a272c540-5cec-4898-bfe5-aba42a319411 from this chassis (sb_readonly=0)
Oct 12 17:32:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:40Z|00085|binding|INFO|Setting lport a272c540-5cec-4898-bfe5-aba42a319411 down in Southbound
Oct 12 17:32:40 np0005481680 ovn_controller[154617]: 2025-10-12T21:32:40Z|00086|binding|INFO|Removing iface tapa272c540-5c ovn-installed in OVS
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:40.926 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:4e:cf 10.100.0.7'], port_security=['fa:16:3e:3d:4e:cf 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f03fc7b2-b000-4972-b1ba-904366ff4d34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-746f9f0d-c12a-426b-a872-a76f216aff44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53113194-7690-4bf3-ad5d-7355c514db99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=902c9f4c-9abd-4ab8-b558-68cf7f6fa39e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=a272c540-5cec-4898-bfe5-aba42a319411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:32:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:40.928 164459 INFO neutron.agent.ovn.metadata.agent [-] Port a272c540-5cec-4898-bfe5-aba42a319411 in datapath 746f9f0d-c12a-426b-a872-a76f216aff44 unbound from our chassis#033[00m
Oct 12 17:32:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:40.931 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 746f9f0d-c12a-426b-a872-a76f216aff44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:32:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:40.932 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f35ffede-5a5a-40a9-a2d7-20da79bedf6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:40 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:40.933 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44 namespace which is not needed anymore#033[00m
Oct 12 17:32:40 np0005481680 nova_compute[264665]: 2025-10-12 21:32:40.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:40 np0005481680 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 12 17:32:40 np0005481680 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Consumed 15.572s CPU time.
Oct 12 17:32:40 np0005481680 systemd-machined[218338]: Machine qemu-5-instance-0000000a terminated.
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.071 2 INFO nova.virt.libvirt.driver [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Instance destroyed successfully.#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.073 2 DEBUG nova.objects.instance [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid f03fc7b2-b000-4972-b1ba-904366ff4d34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.094 2 DEBUG nova.virt.libvirt.vif [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-330538359',display_name='tempest-TestNetworkBasicOps-server-330538359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-330538359',id=10,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEADZUwFVMekygIqAVS23ATWsF5c/ODqFeOdQSvml1oe4ZtKGWFL/PNXhSuam4gmYc/NHW88We3OwxB2B/MwQg+FIx20xpFZ9S9n4lg5X4Nc9WgPBdrw4vCWowpc/0tUWA==',key_name='tempest-TestNetworkBasicOps-1938901682',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:32:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-e0qpinvi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:32:14Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=f03fc7b2-b000-4972-b1ba-904366ff4d34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.095 2 DEBUG nova.network.os_vif_util [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.101 2 DEBUG nova.network.os_vif_util [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.101 2 DEBUG os_vif [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa272c540-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.115 2 INFO os_vif [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:4e:cf,bridge_name='br-int',has_traffic_filtering=True,id=a272c540-5cec-4898-bfe5-aba42a319411,network=Network(746f9f0d-c12a-426b-a872-a76f216aff44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa272c540-5c')#033[00m
Oct 12 17:32:41 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [NOTICE]   (283737) : haproxy version is 2.8.14-c23fe91
Oct 12 17:32:41 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [NOTICE]   (283737) : path to executable is /usr/sbin/haproxy
Oct 12 17:32:41 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [ALERT]    (283737) : Current worker (283739) exited with code 143 (Terminated)
Oct 12 17:32:41 np0005481680 neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44[283733]: [WARNING]  (283737) : All workers exited. Exiting... (0)
Oct 12 17:32:41 np0005481680 systemd[1]: libpod-b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4.scope: Deactivated successfully.
Oct 12 17:32:41 np0005481680 podman[284606]: 2025-10-12 21:32:41.164630024 +0000 UTC m=+0.082138481 container stop b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 12 17:32:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:32:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537774745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:32:41 np0005481680 podman[284606]: 2025-10-12 21:32:41.190643766 +0000 UTC m=+0.108152243 container died b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:32:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:32:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.207 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:32:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4-userdata-shm.mount: Deactivated successfully.
Oct 12 17:32:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f7b1bd0a5bfc017fe5f4917b469ec42f4e7585bcd3b62cce66bce2545d68e51e-merged.mount: Deactivated successfully.
Oct 12 17:32:41 np0005481680 podman[284606]: 2025-10-12 21:32:41.258905531 +0000 UTC m=+0.176414018 container cleanup b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:32:41 np0005481680 systemd[1]: libpod-conmon-b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4.scope: Deactivated successfully.
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.284 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.284 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 12 17:32:41 np0005481680 podman[284664]: 2025-10-12 21:32:41.38819277 +0000 UTC m=+0.089583480 container remove b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.399 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2c6e08-30cd-4392-bd4d-0e0229eb91a7]: (4, ('Sun Oct 12 09:32:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44 (b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4)\nb9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4\nSun Oct 12 09:32:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44 (b9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4)\nb9c4d302b34d0c8157f478ed07c098f1c511fae1aa0c21e390f3a87085d46be4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.401 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf805db-8a01-4d8f-a382-b813aa1163bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.403 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap746f9f0d-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 kernel: tap746f9f0d-c0: left promiscuous mode
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.415 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[847b7142-a34a-4a27-a1f3-b8c37a2ff5e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.442 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ca488003-3c7f-4dc8-b66c-98269e9891d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.443 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[46a786af-e794-4b71-a41e-d05e7e1513ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.473 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5706ac-159a-4898-b9f8-cfb256537a02]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439957, 'reachable_time': 28489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284681, 'error': None, 'target': 'ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
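
The privsep reply above is a netlink RTM_NEWLINK dump for 'lo', with interface attributes flattened into [name, value] pairs under 'attrs'. A minimal, hypothetical helper for pulling a single attribute out of such a message (not the actual pyroute2 or oslo.privsep code):

    def get_attr(link_msg, name, default=None):
        # 'attrs' is a list of [attr_name, value] pairs, as in the dump above
        for attr_name, value in link_msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    lo = {'index': 1, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_MTU', 65536]]}
    assert get_attr(lo, 'IFLA_IFNAME') == 'lo'
    assert get_attr(lo, 'IFLA_MTU') == 65536
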
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.476 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:32:41 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:41.476 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cfca2b-3ade-40b6-a055-203f3298897a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:32:41 np0005481680 systemd[1]: run-netns-ovnmeta\x2d746f9f0d\x2dc12a\x2d426b\x2da872\x2da76f216aff44.mount: Deactivated successfully.
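3
The three lines above complete the metadata-namespace teardown: neutron's privileged ip_lib removes the ovnmeta-… netns, and systemd then reports the matching /run/netns mount unit as deactivated. A hedged sketch of the same removal using pyroute2, which neutron's privileged code builds on; the guard and function name here are illustrative, not neutron's actual implementation:

    from pyroute2 import netns

    def remove_netns_if_exists(name):
        # unlinking /run/netns/<name> is what systemd observes as the
        # run-netns-<name>.mount unit deactivating
        if name in netns.listnetns():
            netns.remove(name)

    remove_netns_if_exists('ovnmeta-746f9f0d-c12a-426b-a872-a76f216aff44')
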
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.581 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.583 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4561MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.584 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.584 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
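
The Acquiring/acquired pair above, and the matching "released … held 0.704s" line further down, are oslo_concurrency's standard lock telemetry. A short sketch of the same pattern with the public API, assuming oslo.concurrency is installed; the function body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # only one caller at a time may mutate the resource tracker's view;
        # lockutils emits the waited/held DEBUG lines seen in this log
        pass

    _update_available_resource()
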
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.676 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance f03fc7b2-b000-4972-b1ba-904366ff4d34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.677 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.677 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
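
The used_ram=640MB in the final view above is tracked allocation, not hypervisor-free memory: per the inventory logged at 21:32:42 the host reserves 512MB, and the one active instance holds a 128MB MEMORY_MB allocation. A quick check of that arithmetic:

    reserved_ram_mb = 512    # MEMORY_MB 'reserved' from the inventory data below
    instance_ram_mb = 128    # allocation for instance f03fc7b2-b000-4972-b1ba-904366ff4d34
    used_vcpus = 0 + 1       # VCPU reserved + the instance's single VCPU
    assert reserved_ram_mb + instance_ram_mb == 640   # matches used_ram=640MB
    assert used_vcpus == 1                            # matches used_vcpus=1
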
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.693 2 INFO nova.virt.libvirt.driver [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Deleting instance files /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34_del#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.694 2 INFO nova.virt.libvirt.driver [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Deletion of /var/lib/nova/instances/f03fc7b2-b000-4972-b1ba-904366ff4d34_del complete#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.725 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.771 2 INFO nova.compute.manager [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.772 2 DEBUG oslo.service.loopingcall [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.773 2 DEBUG nova.compute.manager [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:32:41 np0005481680 nova_compute[264665]: 2025-10-12 21:32:41.774 2 DEBUG nova.network.neutron [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
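
nova wraps the Neutron deallocation above in an oslo.service looping call (_deallocate_network_with_retries) so a transient failure is retried rather than leaking the port. A stdlib-only toy of that retry shape; the attempt count and interval are illustrative, not nova's actual configuration:

    import time

    def call_with_retries(fn, attempts=3, interval=1.0):
        for i in range(attempts):
            try:
                return fn()
            except Exception:
                if i == attempts - 1:
                    raise            # give up after the final attempt
                time.sleep(interval)

    call_with_retries(lambda: print('deallocate_for_instance()'))
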
Oct 12 17:32:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 12 17:32:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:42] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct 12 17:32:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:42] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Oct 12 17:32:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:32:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:32:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:32:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145103677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.223 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
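
The 0.497s "ceph df" call above is an external command run through oslo_concurrency.processutils; nova parses its JSON output to size the RBD-backed disk pool. A hedged stdlib equivalent; the key names follow the ceph df JSON schema, where total_bytes is cluster-wide capacity:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
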
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.231 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.254 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
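
Placement sizes each resource class as (total - reserved) * allocation_ratio, so the inventory above yields 32 schedulable VCPUs, 7168MB of RAM, and 52.2GB of disk. Reproducing that from the logged data:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
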
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.288 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.289 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.625 2 DEBUG nova.network.neutron [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.663 2 INFO nova.compute.manager [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Took 0.89 seconds to deallocate network for instance.#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.721 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.722 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.790 2 DEBUG oslo_concurrency.processutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.862 2 DEBUG nova.network.neutron [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updated VIF entry in instance network info cache for port a272c540-5cec-4898-bfe5-aba42a319411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.863 2 DEBUG nova.network.neutron [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Updating instance_info_cache with network_info: [{"id": "a272c540-5cec-4898-bfe5-aba42a319411", "address": "fa:16:3e:3d:4e:cf", "network": {"id": "746f9f0d-c12a-426b-a872-a76f216aff44", "bridge": "br-int", "label": "tempest-network-smoke--1666593216", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa272c540-5c", "ovs_interfaceid": "a272c540-5cec-4898-bfe5-aba42a319411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
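
The instance_info_cache entry above is plain JSON, so pulling facts like the instance's fixed IP out of a journal capture is a short walk over vifs -> subnets -> ips. For example, with the structure trimmed to the relevant keys:

    import json

    network_info = json.loads(
        '[{"network": {"subnets": [{"cidr": "10.100.0.0/28", '
        '"ips": [{"address": "10.100.0.7", "type": "fixed"}]}]}}]')
    fixed_ips = [ip['address']
                 for vif in network_info
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips'] if ip['type'] == 'fixed']
    print(fixed_ips)   # ['10.100.0.7']
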
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.869 2 DEBUG nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-vif-unplugged-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.870 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.871 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.871 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.871 2 DEBUG nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] No waiting events found dispatching network-vif-unplugged-a272c540-5cec-4898-bfe5-aba42a319411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.872 2 WARNING nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received unexpected event network-vif-unplugged-a272c540-5cec-4898-bfe5-aba42a319411 for instance with vm_state deleted and task_state None.#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.872 2 DEBUG nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.873 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.873 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.873 2 DEBUG oslo_concurrency.lockutils [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.874 2 DEBUG nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] No waiting events found dispatching network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.874 2 WARNING nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received unexpected event network-vif-plugged-a272c540-5cec-4898-bfe5-aba42a319411 for instance with vm_state deleted and task_state None.#033[00m
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.874 2 DEBUG nova.compute.manager [req-15d25bf3-436e-4769-a1b8-8c81e85086ea req-d0296cdb-bc61-492f-99de-bdb4673bd9e6 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Received event network-vif-deleted-a272c540-5cec-4898-bfe5-aba42a319411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
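
The network-vif-* block above shows nova's external-event pattern: each incoming event takes the per-instance "-events" lock, pops any waiter registered under that event name, and is logged as unexpected when nothing is waiting (here the instance is already deleted). A toy version of the pop step only:

    import threading
    from collections import defaultdict

    _events_lock = threading.Lock()
    _waiters = defaultdict(dict)     # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        with _events_lock:           # the "<uuid>-events" lock in the log
            return _waiters[instance_uuid].pop(event_name, None)

    waiter = pop_instance_event(
        'f03fc7b2-b000-4972-b1ba-904366ff4d34',
        'network-vif-unplugged-a272c540-5cec-4898-bfe5-aba42a319411')
    if waiter is None:
        print('No waiting events found; event is unexpected')
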
Oct 12 17:32:42 np0005481680 nova_compute[264665]: 2025-10-12 21:32:42.887 2 DEBUG oslo_concurrency.lockutils [req-479d445c-9ab0-445c-8480-a65b308b64e7 req-507c0c54-7ba1-46a8-901f-df2650686066 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-f03fc7b2-b000-4972-b1ba-904366ff4d34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:32:43 np0005481680 podman[284725]: 2025-10-12 21:32:43.15202094 +0000 UTC m=+0.113322943 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
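
The podman health_status record above embeds the container's config_data as a Python-style dict literal rather than JSON, so ast.literal_eval is the natural way to mine it from a journal capture. The variable below is a trimmed stand-in for the multipathd blob:

    import ast

    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}, 'net': 'host', "
                   "'privileged': True}")
    cfg = ast.literal_eval(config_data)
    print(cfg['healthcheck']['test'])    # -> /openstack/healthcheck
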
Oct 12 17:32:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:43.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
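
Every radosgw request in this capture is three lines: "starting new request", "req done", and a beast access line. A small parser for the access line's method, status, and latency, matching the format of the lines shown here:

    import re

    BEAST = re.compile(r'"(?P<verb>\S+) (?P<path>\S+) \S+" (?P<status>\d+) '
                       r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:32:43.196 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m['verb'], m['status'], float(m['latency']))   # HEAD 200 0.0
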
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.290 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.290 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.291 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.291 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.291 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
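
The burst of "Running periodic task" lines above is oslo_service's periodic task runner walking ComputeManager's decorated methods; _reclaim_queued_deletes then exits early because reclaim_instance_interval is not positive. A hedged sketch of how such tasks are declared, assuming oslo.service is installed; the spacing value and body are illustrative:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            interval = 0             # stands in for CONF.reclaim_instance_interval
            if interval <= 0:
                print('CONF.reclaim_instance_interval <= 0, skipping...')
                return
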
Oct 12 17:32:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:32:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167890216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.366 2 DEBUG oslo_concurrency.processutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.374 2 DEBUG nova.compute.provider_tree [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.392 2 DEBUG nova.scheduler.client.report [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.411 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.438 2 INFO nova.scheduler.client.report [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance f03fc7b2-b000-4972-b1ba-904366ff4d34#033[00m
Oct 12 17:32:43 np0005481680 nova_compute[264665]: 2025-10-12 21:32:43.498 2 DEBUG oslo_concurrency.lockutils [None req-0cd0e7e3-8605-47a1-bbb7-89d2d54200f4 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "f03fc7b2-b000-4972-b1ba-904366ff4d34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:32:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 12 17:32:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:32:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:32:44 np0005481680 nova_compute[264665]: 2025-10-12 21:32:44.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:45.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 12 17:32:45 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:45.948 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:32:45 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:45.949 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
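
The two agent lines above are the nb_cfg heartbeat: SB_Global.nb_cfg moved from 11 to 12, and the agent delays writing the acknowledgement into Chassis_Private.external_ids (the DbSetCommand at 21:32:53 below carries 'neutron:ovn-metadata-sb-cfg': '12'). A stdlib-only toy of that delayed acknowledgement; the randomization is an assumption of this sketch, and the callback stands in for the OVSDB write:

    import random
    import threading

    def delayed_ack(nb_cfg, write_external_ids, max_delay=10):
        delay = random.randint(0, max_delay)   # spreads writes across agents
        print('Delaying updating chassis table for %d seconds' % delay)
        threading.Timer(
            delay, write_external_ids,
            args=({'neutron:ovn-metadata-sb-cfg': str(nb_cfg)},)).start()

    delayed_ack(12, lambda ids: print('db_set Chassis_Private', ids))
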
Oct 12 17:32:45 np0005481680 nova_compute[264665]: 2025-10-12 21:32:45.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:46 np0005481680 nova_compute[264665]: 2025-10-12 21:32:46.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:46 np0005481680 nova_compute[264665]: 2025-10-12 21:32:46.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:47.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:47.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
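
The alertmanager error above means the ceph-dashboard webhook receivers on compute-1 and compute-2 are unreachable within the notification deadline; per the logged URLs, alerts are POSTed to /api/prometheus_receiver on port 8443. A stdlib probe of the same endpoint, with a timeout standing in for the dispatcher's deadline:

    import urllib.request

    url = ('http://compute-2.ctlplane.example.com:8443'
           '/api/prometheus_receiver')
    try:
        urllib.request.urlopen(url, data=b'{}', timeout=5)
    except OSError as exc:    # URLError and socket timeouts subclass OSError
        print('Notify attempt failed, will retry later:', exc)
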
Oct 12 17:32:47 np0005481680 nova_compute[264665]: 2025-10-12 21:32:47.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:47 np0005481680 nova_compute[264665]: 2025-10-12 21:32:47.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:32:47 np0005481680 nova_compute[264665]: 2025-10-12 21:32:47.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:32:47 np0005481680 nova_compute[264665]: 2025-10-12 21:32:47.678 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:32:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Oct 12 17:32:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:48 np0005481680 nova_compute[264665]: 2025-10-12 21:32:48.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:48 np0005481680 nova_compute[264665]: 2025-10-12 21:32:48.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:32:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3832692550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:32:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3832692550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:32:48 np0005481680 nova_compute[264665]: 2025-10-12 21:32:48.675 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:32:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:32:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:49.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:32:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Oct 12 17:32:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:50.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:50 np0005481680 podman[284785]: 2025-10-12 21:32:50.114232636 +0000 UTC m=+0.072336871 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:32:51 np0005481680 nova_compute[264665]: 2025-10-12 21:32:51.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:51.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:51 np0005481680 nova_compute[264665]: 2025-10-12 21:32:51.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 12 17:32:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:32:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:32:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:32:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 12 17:32:53 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:32:53.952 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:32:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:54.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:32:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 12 17:32:56 np0005481680 nova_compute[264665]: 2025-10-12 21:32:56.066 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304761.065481, f03fc7b2-b000-4972-b1ba-904366ff4d34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:32:56 np0005481680 nova_compute[264665]: 2025-10-12 21:32:56.067 2 INFO nova.compute.manager [-] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] VM Stopped (Lifecycle Event)#033[00m
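
The two lines above show the libvirt driver turning the earlier domain destroy into a nova lifecycle event about fifteen seconds later: the epoch 1760304761.065481 corresponds to 21:32:41 UTC, when the guest was destroyed, not to the 21:32:56 emission time. A toy record in the same shape as the logged event; the dataclass is illustrative, not nova's actual type:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class LifecycleEvent:
        uuid: str
        transition: str
        timestamp: float = field(default_factory=time.time)

    ev = LifecycleEvent('f03fc7b2-b000-4972-b1ba-904366ff4d34', 'Stopped')
    print('Emitting event <LifecycleEvent: %f, %s => %s>'
          % (ev.timestamp, ev.uuid, ev.transition))
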
Oct 12 17:32:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:56 np0005481680 nova_compute[264665]: 2025-10-12 21:32:56.094 2 DEBUG nova.compute.manager [None req-f33c80a7-ae78-4b27-9dd4-ac08ef6b2804 - - - - - -] [instance: f03fc7b2-b000-4972-b1ba-904366ff4d34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:32:56 np0005481680 nova_compute[264665]: 2025-10-12 21:32:56.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:56 np0005481680 nova_compute[264665]: 2025-10-12 21:32:56.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:32:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:57.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:57.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:32:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:32:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:32:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:58.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:32:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:32:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:32:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:32:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:32:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:32:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:32:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:32:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:00 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 12 17:33:00 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:00.988106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:33:00 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 12 17:33:00 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304780988184, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2128, "num_deletes": 251, "total_data_size": 4155546, "memory_usage": 4215344, "flush_reason": "Manual Compaction"}
Oct 12 17:33:00 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304781008596, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4011864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29422, "largest_seqno": 31548, "table_properties": {"data_size": 4002372, "index_size": 5922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20001, "raw_average_key_size": 20, "raw_value_size": 3983288, "raw_average_value_size": 4081, "num_data_blocks": 254, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304580, "oldest_key_time": 1760304580, "file_creation_time": 1760304780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 20533 microseconds, and 12063 cpu microseconds.
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.008651) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4011864 bytes OK
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.008676) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.012212) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.012230) EVENT_LOG_v1 {"time_micros": 1760304781012224, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.012258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4146915, prev total WAL file size 4146915, number of live WAL files 2.
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.013658) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3917KB)], [65(11MB)]
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304781013821, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 15959384, "oldest_snapshot_seqno": -1}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6177 keys, 13861200 bytes, temperature: kUnknown
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304781121585, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13861200, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13820889, "index_size": 23753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 158262, "raw_average_key_size": 25, "raw_value_size": 13710584, "raw_average_value_size": 2219, "num_data_blocks": 955, "num_entries": 6177, "num_filter_entries": 6177, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.122156) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13861200 bytes
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.124998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 128.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(7.4) write-amplify(3.5) OK, records in: 6698, records dropped: 521 output_compression: NoCompression
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.125036) EVENT_LOG_v1 {"time_micros": 1760304781125025, "job": 36, "event": "compaction_finished", "compaction_time_micros": 107948, "compaction_time_cpu_micros": 50627, "output_level": 6, "num_output_files": 1, "total_output_size": 13861200, "num_input_records": 6698, "num_output_records": 6177, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
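[Editor's note] The amplification figures in JOB 36's summary follow directly from the byte counts in the surrounding event lines: write amplification is output over the L0 input, and read-write amplification counts every byte moved relative to that same L0 input. A quick check:

    # Exact byte counts from the event lines above: L0 input = table #67
    # (4011864 B), total input = 15959384 B, output = table #68 (13861200 B).
    l0_in = 4011864
    total_in = 15959384
    out = 13861200
    write_amp = out / l0_in                # -> ~3.5
    rw_amp = (total_in + out) / l0_in      # -> ~7.4, matching the summary
    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")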
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304781126217, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304781128832, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.013566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.128887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.128892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.128893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.128895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:33:01.128897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:33:01 np0005481680 nova_compute[264665]: 2025-10-12 21:33:01.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:01 np0005481680 nova_compute[264665]: 2025-10-12 21:33:01.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:33:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:33:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
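[Editor's note] The two lines above are the same scrape seen twice: once on the mgr container's stdout and once via the mgr's cherrypy access logger. A hand-driven fetch of the same endpoint; the hostname is taken from this node's name and port 9283 is the ceph-mgr prometheus module's default, both assumptions rather than values present in the log:

    import urllib.request

    # 9283 is the mgr prometheus module's default port (assumed, not logged).
    url = "http://compute-0.ctlplane.example.com:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, len(resp.read()))  # expect 200 and ~48454 bytes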
Oct 12 17:33:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.283 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.283 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.300 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.392 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.393 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
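[Editor's note] The Acquiring/acquired/released triplets around "compute_resources" are oslo.concurrency's standard lock logging. A minimal sketch of the pattern that produces them; the body is a placeholder, not Nova's actual claim code:

    from oslo_concurrency import lockutils

    # Entering the context logs "Acquiring lock ..." / "Lock ... acquired",
    # leaving it logs "Lock ... released" with the hold time (DEBUG level).
    with lockutils.lock("compute_resources"):
        pass  # placeholder: claim resources while the lock is held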
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.402 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.402 2 INFO nova.compute.claims [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.516 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:02 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:33:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277280399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.974 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
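[Editor's note] Nova shells out to "ceph df" while claiming disk so the RBD-backed pool's capacity feeds the resource tracker. The same query can be run by hand under the same identity; the JSON keys follow the standard "ceph df --format=json" output and should be treated as an assumption:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # Cluster-wide totals; per-pool numbers live under stats["pools"].
    print(stats["stats"]["total_avail_bytes"])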
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.983 2 DEBUG nova.compute.provider_tree [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:33:02 np0005481680 nova_compute[264665]: 2025-10-12 21:33:02.997 2 DEBUG nova.scheduler.client.report [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.020 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.021 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.065 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.066 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.088 2 INFO nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.107 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.199 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.200 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.200 2 INFO nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Creating image(s)#033[00m
Oct 12 17:33:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.233 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.270 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.303 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.308 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:33:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.395 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
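[Editor's note] The base image is probed with "qemu-img info" wrapped in oslo's prlimit helper, capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the agent. A sketch of the same invocation, copied from the command line logged above:

    import json, subprocess

    base = "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", base, "--force-share", "--output=json"]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])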
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.396 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "7497bb5386651df92e6b6f594b508b7cfd59032d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.397 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.397 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "7497bb5386651df92e6b6f594b508b7cfd59032d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.431 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.435 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:33:03 np0005481680 nova_compute[264665]: 2025-10-12 21:33:03.990 2 DEBUG nova.policy [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '935f7ca5b6aa4bff9c9b406ff9cf8dc3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '996cf7b314dd4598812dc5b6cda29b64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 12 17:33:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:04.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:04 np0005481680 podman[284936]: 2025-10-12 21:33:04.113227184 +0000 UTC m=+0.066013950 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 12 17:33:04 np0005481680 podman[284937]: 2025-10-12 21:33:04.169654029 +0000 UTC m=+0.117647393 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001)
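[Editor's note] These health_status events come from podman periodically running each container's configured test ("/openstack/healthcheck", per the config_data above). The same check can be driven on demand; a sketch:

    import subprocess

    for name in ("iscsid", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")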
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.407 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.972s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.518 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] resizing rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
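[Editor's note] The spawn path just materialized the root disk: import the flat base file into the "vms" pool, then grow it to the flavor's root size (root_gb=1, i.e. 1073741824 bytes, matching the resize above). A CLI-level sketch of the same two steps; Nova itself calls librbd, and "rbd resize" takes MiB by default:

    import subprocess

    base = "/var/lib/nova/instances/_base/7497bb5386651df92e6b6f594b508b7cfd59032d"
    disk = "vms/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base,
                           disk.split("/", 1)[1], "--image-format=2", *auth])
    # 1 GiB root disk = 1024 MiB; rbd sizes default to MiB.
    subprocess.check_call(["rbd", "resize", disk, "--size", "1024", *auth])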
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.674 2 DEBUG nova.objects.instance [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'migration_context' on Instance uuid d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.694 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.695 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Ensure instance console log exists: /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.696 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.696 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:04 np0005481680 nova_compute[264665]: 2025-10-12 21:33:04.697 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:05 np0005481680 nova_compute[264665]: 2025-10-12 21:33:05.217 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Successfully created port: 56287bae-33ab-4007-8c88-0adeea38f1fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 12 17:33:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Oct 12 17:33:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:06.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.918 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Successfully updated port: 56287bae-33ab-4007-8c88-0adeea38f1fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.935 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.935 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:33:06 np0005481680 nova_compute[264665]: 2025-10-12 21:33:06.936 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.020 2 DEBUG nova.compute.manager [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.021 2 DEBUG nova.compute.manager [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing instance network info cache due to event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.021 2 DEBUG oslo_concurrency.lockutils [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.066 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 12 17:33:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:07.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
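[Editor's note] Both dashboard webhook receivers are timing out, so the alert never leaves the dispatcher. A quick reachability probe against the exact URLs from the error message, just to reproduce the failure by hand:

    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            urllib.request.urlopen(url, data=b"{}", timeout=5)
        except Exception as exc:
            print(url, "->", exc)  # expect timeouts, matching the dispatcher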
Oct 12 17:33:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 88 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.811 2 DEBUG nova.network.neutron [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.851 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.852 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Instance network_info: |[{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
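[Editor's note] The network_info blob is plain JSON, so the bits that matter for the guest (MAC, fixed IP, MTU) are easy to lift out. A sketch against an abbreviated copy of the payload above:

    import json

    network_info = json.loads('''[{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd",
      "address": "fa:16:3e:45:d6:1e",
      "network": {"meta": {"mtu": 1442},
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.4", "type": "fixed"}]}]}}]''')

    vif = network_info[0]
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["address"], fixed, vif["network"]["meta"]["mtu"])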
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.852 2 DEBUG oslo_concurrency.lockutils [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.853 2 DEBUG nova.network.neutron [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.858 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Start _get_guest_xml network_info=[{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'disk_bus': 'virtio', 'encrypted': False, 'image_id': '0838cede-7f25-4ac2-ae16-04e86e2d6b46'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.866 2 WARNING nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.871 2 DEBUG nova.virt.libvirt.host [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.872 2 DEBUG nova.virt.libvirt.host [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.882 2 DEBUG nova.virt.libvirt.host [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.883 2 DEBUG nova.virt.libvirt.host [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.883 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.884 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-12T21:22:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eb33ea4e-2672-45dd-9a0e-ccb54873bf70',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-12T21:22:52Z,direct_url=<?>,disk_format='qcow2',id=0838cede-7f25-4ac2-ae16-04e86e2d6b46,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e256cf69486e4f8b98a8da7fd5db38a5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-12T21:22:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.885 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.886 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.886 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.886 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.887 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.887 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.888 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.888 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.889 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.889 2 DEBUG nova.virt.hardware [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
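[Editor's note] With every limit and preference at 0:0:0, any (sockets, cores, threads) triple whose product equals the vCPU count is admissible; for a single vCPU that collapses to exactly one candidate, which is why the log reports one possible topology. A toy enumeration illustrating the constraint, not Nova's actual routine:

    import itertools

    vcpus = 1
    topologies = [(s, c, t)
                  for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3)
                  if s * c * t == vcpus]
    print(topologies)  # [(1, 1, 1)] -- one possible topology, as logged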
Oct 12 17:33:07 np0005481680 nova_compute[264665]: 2025-10-12 21:33:07.894 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:08.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:33:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042396879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.367 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.404 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.410 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:08.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:08 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 12 17:33:08 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071202835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.953 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.955 2 DEBUG nova.virt.libvirt.vif [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417374243',display_name='tempest-TestNetworkBasicOps-server-417374243',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417374243',id=11,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZc3He0shAPkOombcjIUGdP9n1u80HjNEPh6T4ZbjB/U75NhThD8XjiO3TIYuOBcapxnIe10ozz2IXBzeuKlp5zNZh7B6bxabbbz46S6IB5hJcME+xFC5Abfq2h8a/4jw==',key_name='tempest-TestNetworkBasicOps-1268665234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-kgruatmz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:33:03Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=d4877f49-ddd8-47a2-9a2f-6c2e26c9f401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.956 2 DEBUG nova.network.os_vif_util [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.957 2 DEBUG nova.network.os_vif_util [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.958 2 DEBUG nova.objects.instance [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.973 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] End _get_guest_xml xml=<domain type="kvm">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <uuid>d4877f49-ddd8-47a2-9a2f-6c2e26c9f401</uuid>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <name>instance-0000000b</name>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <memory>131072</memory>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <vcpu>1</vcpu>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <metadata>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:name>tempest-TestNetworkBasicOps-server-417374243</nova:name>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:creationTime>2025-10-12 21:33:07</nova:creationTime>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:flavor name="m1.nano">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:memory>128</nova:memory>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:disk>1</nova:disk>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:swap>0</nova:swap>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:ephemeral>0</nova:ephemeral>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:vcpus>1</nova:vcpus>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </nova:flavor>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:owner>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:user uuid="935f7ca5b6aa4bff9c9b406ff9cf8dc3">tempest-TestNetworkBasicOps-977144451-project-member</nova:user>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:project uuid="996cf7b314dd4598812dc5b6cda29b64">tempest-TestNetworkBasicOps-977144451</nova:project>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </nova:owner>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:root type="image" uuid="0838cede-7f25-4ac2-ae16-04e86e2d6b46"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <nova:ports>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <nova:port uuid="56287bae-33ab-4007-8c88-0adeea38f1fd">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        </nova:port>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </nova:ports>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </nova:instance>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </metadata>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <sysinfo type="smbios">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <system>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="manufacturer">RDO</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="product">OpenStack Compute</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="serial">d4877f49-ddd8-47a2-9a2f-6c2e26c9f401</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="uuid">d4877f49-ddd8-47a2-9a2f-6c2e26c9f401</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <entry name="family">Virtual Machine</entry>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </system>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </sysinfo>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <os>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <boot dev="hd"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <smbios mode="sysinfo"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </os>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <features>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <acpi/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <apic/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <vmcoreinfo/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </features>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <clock offset="utc">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <timer name="pit" tickpolicy="delay"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <timer name="hpet" present="no"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </clock>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <cpu mode="host-model" match="exact">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <topology sockets="1" cores="1" threads="1"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </cpu>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  <devices>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <disk type="network" device="disk">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <target dev="vda" bus="virtio"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <disk type="network" device="cdrom">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <driver type="raw" cache="none"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <source protocol="rbd" name="vms/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.100" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.102" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <host name="192.168.122.101" port="6789"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </source>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <auth username="openstack">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:        <secret type="ceph" uuid="5adb8c35-1b74-5730-a252-62321f654cd5"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      </auth>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <target dev="sda" bus="sata"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </disk>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <interface type="ethernet">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <mac address="fa:16:3e:45:d6:1e"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <driver name="vhost" rx_queue_size="512"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <mtu size="1442"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <target dev="tap56287bae-33"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </interface>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <serial type="pty">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <log file="/var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/console.log" append="off"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </serial>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <video>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <model type="virtio"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </video>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <input type="tablet" bus="usb"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <rng model="virtio">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <backend model="random">/dev/urandom</backend>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </rng>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="pci" model="pcie-root-port"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <controller type="usb" index="0"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    <memballoon model="virtio">
Oct 12 17:33:08 np0005481680 nova_compute[264665]:      <stats period="10"/>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:    </memballoon>
Oct 12 17:33:08 np0005481680 nova_compute[264665]:  </devices>
Oct 12 17:33:08 np0005481680 nova_compute[264665]: </domain>
Oct 12 17:33:08 np0005481680 nova_compute[264665]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
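
A domain XML dump like the one ending above is easy to inspect offline. A small stdlib sketch that pulls the RBD disk sources and the tap interface out of a saved copy (the file name is hypothetical):

    import xml.etree.ElementTree as ET

    def summarize_domain(path="instance-0000000b.xml"):
        root = ET.parse(path).getroot()
        for disk in root.findall("./devices/disk"):
            src, tgt = disk.find("source"), disk.find("target")
            if src is not None and src.get("protocol") == "rbd":
                print("rbd image %s -> %s" % (src.get("name"), tgt.get("dev")))
        for iface in root.findall("./devices/interface"):
            print("vif %s on %s" % (iface.find("mac").get("address"),
                                    iface.find("target").get("dev")))
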
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.975 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Preparing to wait for external event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.975 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.975 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.975 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.976 2 DEBUG nova.virt.libvirt.vif [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-12T21:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417374243',display_name='tempest-TestNetworkBasicOps-server-417374243',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417374243',id=11,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZc3He0shAPkOombcjIUGdP9n1u80HjNEPh6T4ZbjB/U75NhThD8XjiO3TIYuOBcapxnIe10ozz2IXBzeuKlp5zNZh7B6bxabbbz46S6IB5hJcME+xFC5Abfq2h8a/4jw==',key_name='tempest-TestNetworkBasicOps-1268665234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-kgruatmz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-12T21:33:03Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=d4877f49-ddd8-47a2-9a2f-6c2e26c9f401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.977 2 DEBUG nova.network.os_vif_util [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.977 2 DEBUG nova.network.os_vif_util [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.978 2 DEBUG os_vif [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56287bae-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56287bae-33, col_values=(('external_ids', {'iface-id': '56287bae-33ab-4007-8c88-0adeea38f1fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:d6:1e', 'vm-uuid': 'd4877f49-ddd8-47a2-9a2f-6c2e26c9f401'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:08 np0005481680 NetworkManager[44859]: <info>  [1760304788.9864] manager: (tap56287bae-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:08 np0005481680 nova_compute[264665]: 2025-10-12 21:33:08.996 2 INFO os_vif [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33')#033[00m
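
The AddBridgeCommand/AddPortCommand/DbSetCommand transaction that os-vif just committed through ovsdbapp has a rough ovs-vsctl equivalent; a sketch with the values from the log (the equivalence is an assumption, not how os-vif itself is implemented):

    import subprocess

    def plug_vif(bridge="br-int", dev="tap56287bae-33",
                 iface_id="56287bae-33ab-4007-8c88-0adeea38f1fd",
                 mac="fa:16:3e:45:d6:1e",
                 vm_uuid="d4877f49-ddd8-47a2-9a2f-6c2e26c9f401"):
        # AddBridgeCommand(may_exist=True, datapath_type=system)
        subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge, "--",
                               "set", "Bridge", bridge, "datapath_type=system"])
        # AddPortCommand(may_exist=True) plus the DbSetCommand on the Interface row
        subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", bridge, dev, "--",
                               "set", "Interface", dev,
                               "external_ids:iface-id=" + iface_id,
                               "external_ids:iface-status=active",
                               "external_ids:attached-mac=" + mac,
                               "external_ids:vm-uuid=" + vm_uuid])
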
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.067 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.068 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.068 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] No VIF found with MAC fa:16:3e:45:d6:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.068 2 INFO nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Using config drive#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.093 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.314 2 DEBUG nova.network.neutron [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated VIF entry in instance network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.315 2 DEBUG nova.network.neutron [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.328 2 DEBUG oslo_concurrency.lockutils [req-59defb9b-2df2-4400-bb11-c318449c8120 req-c0565dbb-b301-43a3-9a08-9c45e18c6953 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.364 2 INFO nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Creating config drive at /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.373 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl12csd2r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.517 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl12csd2r" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.575 2 DEBUG nova.storage.rbd_utils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] rbd image d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 12 17:33:09 np0005481680 nova_compute[264665]: 2025-10-12 21:33:09.580 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:33:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:33:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:10.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.358 2 DEBUG oslo_concurrency.processutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.359 2 INFO nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Deleting local config drive /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401/disk.config because it was imported into RBD.#033[00m
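
The config-drive sequence above condenses to three steps: build the ISO with mkisofs, import it into the "vms" pool, delete the local copy. A sketch using the exact flags from the log (the function name is hypothetical):

    import os
    import subprocess

    def publish_config_drive(src_dir, iso_path, image_name):
        subprocess.check_call(
            ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher",
             "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", src_dir])
        subprocess.check_call(
            ["rbd", "import", "--pool", "vms", iso_path, image_name,
             "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        os.unlink(iso_path)  # "Deleting local config drive ... imported into RBD"
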
Oct 12 17:33:10 np0005481680 kernel: tap56287bae-33: entered promiscuous mode
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.4448] manager: (tap56287bae-33): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct 12 17:33:10 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:10Z|00087|binding|INFO|Claiming lport 56287bae-33ab-4007-8c88-0adeea38f1fd for this chassis.
Oct 12 17:33:10 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:10Z|00088|binding|INFO|56287bae-33ab-4007-8c88-0adeea38f1fd: Claiming fa:16:3e:45:d6:1e 10.100.0.4
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 systemd-udevd[285220]: Network interface NamePolicy= disabled on kernel command line.
Oct 12 17:33:10 np0005481680 systemd-machined[218338]: New machine qemu-6-instance-0000000b.
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.4997] device (tap56287bae-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.5016] device (tap56287bae-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.497 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:d6:1e 10.100.0.4'], port_security=['fa:16:3e:45:d6:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd4877f49-ddd8-47a2-9a2f-6c2e26c9f401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd651b6f-1724-42cd-a3ff-037629cdb232', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '2', 'neutron:security_group_ids': '685e57f6-0891-4206-8c34-eec64721202d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59359a0d-cfbb-460a-87ed-6bbf48fcb204, chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=56287bae-33ab-4007-8c88-0adeea38f1fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.499 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 56287bae-33ab-4007-8c88-0adeea38f1fd in datapath bd651b6f-1724-42cd-a3ff-037629cdb232 bound to our chassis#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.500 164459 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd651b6f-1724-42cd-a3ff-037629cdb232#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.518 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a45708-36d3-46c1-9275-5ca27428f742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.519 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd651b6f-11 in ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.523 271121 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd651b6f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.523 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[d31a38c8-fb87-4862-8905-6ed4b809d433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.524 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[44f752d3-b349-4c16-aa91-ec30e7a492fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.561 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[aa38c6b5-f6e4-405a-98a6-e502df73a575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:10Z|00089|binding|INFO|Setting lport 56287bae-33ab-4007-8c88-0adeea38f1fd ovn-installed in OVS
Oct 12 17:33:10 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:10Z|00090|binding|INFO|Setting lport 56287bae-33ab-4007-8c88-0adeea38f1fd up in Southbound
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.587 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[4c62b4cc-391a-4eaa-8133-8c7f26956243]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.627 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[32adc96f-11ba-4df7-bd5a-00450e3c3c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.634 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[0de583b5-1720-4300-bd54-9749098e7e9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.6431] manager: (tapbd651b6f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.679 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[cba5db25-ed3b-4280-bc59-8869ffa516fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.684 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[a5be72f7-2d13-43c9-81ea-a80fe10a10e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.7188] device (tapbd651b6f-10): carrier: link connected
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.724 271215 DEBUG oslo.privsep.daemon [-] privsep: reply[4ffd637e-b448-41fd-9f41-cb2a247e6d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.748 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[81b6ff3d-ef29-4daa-8512-3b626e8d8e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd651b6f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:04:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445735, 'reachable_time': 36313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285253, 'error': None, 'target': 'ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.764 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[f916349b-83b2-435b-82aa-dd0f60c3ec32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:42f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445735, 'tstamp': 445735}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285254, 'error': None, 'target': 'ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.784 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbab057-7d0c-4540-889e-add160a7486a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd651b6f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:04:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445735, 'reachable_time': 36313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285255, 'error': None, 'target': 'ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.818 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaf9792-3be6-4aa2-bf9e-9b3101ae7d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.870 2 DEBUG nova.compute.manager [req-717acb00-4690-4069-8827-fccba32b5921 req-e957d3f2-de3b-4b19-ac11-7c93ffbb0872 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.870 2 DEBUG oslo_concurrency.lockutils [req-717acb00-4690-4069-8827-fccba32b5921 req-e957d3f2-de3b-4b19-ac11-7c93ffbb0872 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.871 2 DEBUG oslo_concurrency.lockutils [req-717acb00-4690-4069-8827-fccba32b5921 req-e957d3f2-de3b-4b19-ac11-7c93ffbb0872 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.871 2 DEBUG oslo_concurrency.lockutils [req-717acb00-4690-4069-8827-fccba32b5921 req-e957d3f2-de3b-4b19-ac11-7c93ffbb0872 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.871 2 DEBUG nova.compute.manager [req-717acb00-4690-4069-8827-fccba32b5921 req-e957d3f2-de3b-4b19-ac11-7c93ffbb0872 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Processing event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.890 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[0358208a-ed31-4f74-99dc-47fea4f167ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.892 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd651b6f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.892 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.893 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd651b6f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:33:10 np0005481680 NetworkManager[44859]: <info>  [1760304790.9317] manager: (tapbd651b6f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 12 17:33:10 np0005481680 kernel: tapbd651b6f-10: entered promiscuous mode
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.935 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd651b6f-10, col_values=(('external_ids', {'iface-id': '49a4cfe8-484c-4399-81f4-2b104c2453ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
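[editor's note] The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand writing external_ids:iface-id) are the usual sequence for moving the tap into br-int and tagging it so ovn-controller can bind the lport. A hedged equivalent via ovsdbapp's Open vSwitch API, assuming `ovs` is an already-connected ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl instance:

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapbd651b6f-10', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapbd651b6f-10', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapbd651b6f-10',
            ('external_ids',
             {'iface-id': '49a4cfe8-484c-4399-81f4-2b104c2453ed'})))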
Oct 12 17:33:10 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:10Z|00091|binding|INFO|Releasing lport 49a4cfe8-484c-4399-81f4-2b104c2453ed from this chassis (sb_readonly=0)
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.938 164459 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd651b6f-1724-42cd-a3ff-037629cdb232.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd651b6f-1724-42cd-a3ff-037629cdb232.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.940 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[11d513db-ce2f-4fd6-b0dd-dc70f0712ba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.941 164459 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: global
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    log         /dev/log local0 debug
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    log-tag     haproxy-metadata-proxy-bd651b6f-1724-42cd-a3ff-037629cdb232
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    user        root
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    group       root
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    maxconn     1024
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    pidfile     /var/lib/neutron/external/pids/bd651b6f-1724-42cd-a3ff-037629cdb232.pid.haproxy
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    daemon
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: defaults
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    log global
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    mode http
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    option httplog
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    option dontlognull
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    option http-server-close
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    option forwardfor
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    retries                 3
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    timeout http-request    30s
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    timeout connect         30s
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    timeout client          32s
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    timeout server          32s
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    timeout http-keep-alive 30s
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: listen listener
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    bind 169.254.169.254:80
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    server metadata /var/lib/neutron/metadata_proxy
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]:    http-request add-header X-OVN-Network-ID bd651b6f-1724-42cd-a3ff-037629cdb232
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 12 17:33:10 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:10.942 164459 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232', 'env', 'PROCESS_TAG=haproxy-bd651b6f-1724-42cd-a3ff-037629cdb232', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd651b6f-1724-42cd-a3ff-037629cdb232.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
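[editor's note] create_config_file renders the haproxy_cfg dumped above, and the agent then launches haproxy inside the ovnmeta- namespace via rootwrap. A minimal reproduction of that render-and-launch step, as a sketch only: the template below is abbreviated, and the real driver goes through neutron-rootwrap and a wrapper script rather than calling ip(8) directly (root privileges assumed):

    import subprocess
    from string import Template

    network_id = 'bd651b6f-1724-42cd-a3ff-037629cdb232'

    # Abbreviated stand-in for the full config dumped in the log above.
    CFG = Template(
        'global\n'
        '    pidfile $pidfile\n'
        '    daemon\n'
        'listen listener\n'
        '    bind 169.254.169.254:80\n')

    conf = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
    with open(conf, 'w') as f:
        f.write(CFG.substitute(
            pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id))

    # Same shape as the command logged by create_process, minus rootwrap.
    subprocess.check_call(['ip', 'netns', 'exec', 'ovnmeta-' + network_id,
                           'haproxy', '-f', conf])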
Oct 12 17:33:10 np0005481680 nova_compute[264665]: 2025-10-12 21:33:10.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:11.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:11 np0005481680 podman[285330]: 2025-10-12 21:33:11.414721558 +0000 UTC m=+0.075045650 container create 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 12 17:33:11 np0005481680 systemd[1]: Started libpod-conmon-32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868.scope.
Oct 12 17:33:11 np0005481680 podman[285330]: 2025-10-12 21:33:11.380140258 +0000 UTC m=+0.040464430 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 12 17:33:11 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:11 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5455a1e4ff202242580d47d1956e01b69c7a3118dad1cabda64bb4ebd4f76ec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:11 np0005481680 podman[285330]: 2025-10-12 21:33:11.50760039 +0000 UTC m=+0.167924562 container init 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 12 17:33:11 np0005481680 podman[285330]: 2025-10-12 21:33:11.517728358 +0000 UTC m=+0.178052490 container start 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
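[editor's note] podman walks the haproxy side-car through create, init and start within roughly 100 ms. To confirm it stayed up afterwards, one option is querying podman from Python (container name copied from the log; this is an illustration, not part of the deployment):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232'
    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        text=True).strip()
    print(status)   # expected 'running' once the start event above has fired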
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.519 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.519 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304791.5184186, d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.519 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] VM Started (Lifecycle Event)#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.524 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.529 2 INFO nova.virt.libvirt.driver [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Instance spawned successfully.#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.529 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 12 17:33:11 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [NOTICE]   (285350) : New worker (285352) forked
Oct 12 17:33:11 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [NOTICE]   (285350) : Loading success.
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.545 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.550 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.557 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.557 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.557 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.558 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.558 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.558 2 DEBUG nova.virt.libvirt.driver [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.577 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.578 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304791.5186884, d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.578 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] VM Paused (Lifecycle Event)#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.604 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.609 2 DEBUG nova.virt.driver [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] Emitting event <LifecycleEvent: 1760304791.5235827, d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.610 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] VM Resumed (Lifecycle Event)#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.618 2 INFO nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Took 8.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.618 2 DEBUG nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.626 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.628 2 DEBUG nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.655 2 INFO nova.compute.manager [None req-f03f8a8b-7e01-4c04-a506-8b74e9a52884 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
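[editor's note] In the two "Synchronizing instance power state" lines, "DB power_state: 0, VM power_state: 1" are nova.compute.power_state codes: the database still records NOSTATE while libvirt already reports RUNNING, hence the "pending task (spawning). Skip." lines. The mapping, as defined in nova's power_state module:

    # nova.compute.power_state constants (2 and 5 are unused).
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[0], '->', POWER_STATES[1])   # NOSTATE -> RUNNING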
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.674 2 INFO nova.compute.manager [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Took 9.32 seconds to build instance.#033[00m
Oct 12 17:33:11 np0005481680 nova_compute[264665]: 2025-10-12 21:33:11.689 2 DEBUG oslo_concurrency.lockutils [None req-9b96e2d2-dd4e-4db3-91a8-4aa4cec6e2d5 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:33:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:33:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:33:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.976 2 DEBUG nova.compute.manager [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.976 2 DEBUG oslo_concurrency.lockutils [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.977 2 DEBUG oslo_concurrency.lockutils [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.977 2 DEBUG oslo_concurrency.lockutils [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.978 2 DEBUG nova.compute.manager [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:33:12 np0005481680 nova_compute[264665]: 2025-10-12 21:33:12.978 2 WARNING nova.compute.manager [req-1388f5c2-64e8-4038-bddc-00f89638fc57 req-4f59a6c7-2193-4b9d-a7cd-cd80f2cf886f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state None.#033[00m
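[editor's note] The WARNING above is the benign tail of the same flow: this second network-vif-plugged for port 56287bae arrives after the waiter was already popped at 21:33:10 and the instance has gone active, so there is nothing left to dispatch. Schematically (plain threading, not nova's actual implementation):

    import threading

    waiters = {}   # (instance, event) -> threading.Event, registered pre-plug

    def deliver(instance, event):
        w = waiters.pop((instance, event), None)
        if w is None:
            # Counterpart of the "Received unexpected event ..." warning above.
            print('unexpected %s for %s' % (event, instance))
        else:
            w.set()

    waiters[('d4877f49', 'network-vif-plugged')] = threading.Event()
    deliver('d4877f49', 'network-vif-plugged')   # consumed by the waiter
    deliver('d4877f49', 'network-vif-plugged')   # duplicate -> unexpected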
Oct 12 17:33:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:13.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 12 17:33:13 np0005481680 nova_compute[264665]: 2025-10-12 21:33:13.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:14.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:14 np0005481680 podman[285364]: 2025-10-12 21:33:14.142836644 +0000 UTC m=+0.100225180 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:33:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:15 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:15Z|00092|binding|INFO|Releasing lport 49a4cfe8-484c-4399-81f4-2b104c2453ed from this chassis (sb_readonly=0)
Oct 12 17:33:15 np0005481680 NetworkManager[44859]: <info>  [1760304795.1371] manager: (patch-provnet-e8d293c0-bac9-44f6-849f-722604222b82-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 12 17:33:15 np0005481680 NetworkManager[44859]: <info>  [1760304795.1384] manager: (patch-br-int-to-provnet-e8d293c0-bac9-44f6-849f-722604222b82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 12 17:33:15 np0005481680 nova_compute[264665]: 2025-10-12 21:33:15.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:15 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:15Z|00093|binding|INFO|Releasing lport 49a4cfe8-484c-4399-81f4-2b104c2453ed from this chassis (sb_readonly=0)
Oct 12 17:33:15 np0005481680 nova_compute[264665]: 2025-10-12 21:33:15.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:15 np0005481680 nova_compute[264665]: 2025-10-12 21:33:15.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 12 17:33:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:16.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.309 2 DEBUG nova.compute.manager [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.310 2 DEBUG nova.compute.manager [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing instance network info cache due to event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.310 2 DEBUG oslo_concurrency.lockutils [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.311 2 DEBUG oslo_concurrency.lockutils [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.311 2 DEBUG nova.network.neutron [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:33:16 np0005481680 nova_compute[264665]: 2025-10-12 21:33:16.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:17.257Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:33:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:17.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
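[editor's note] Both alertmanager failures point at the ceph-dashboard webhook receivers on compute-1/compute-2 timing out rather than refusing the connection. A quick way to probe one endpoint from this host (URL copied from the log; using requests is an assumption, any HTTP client works):

    import requests

    url = 'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver'
    try:
        # Alertmanager POSTs a JSON alert payload; an empty alert list is
        # enough to see whether the endpoint answers at all.
        r = requests.post(url, json={'alerts': []}, timeout=5)
        print(r.status_code)
    except requests.RequestException as exc:
        print('unreachable:', exc)   # matches "dial tcp ... i/o timeout" above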
Oct 12 17:33:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 12 17:33:17 np0005481680 nova_compute[264665]: 2025-10-12 21:33:17.873 2 DEBUG nova.network.neutron [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated VIF entry in instance network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:33:17 np0005481680 nova_compute[264665]: 2025-10-12 21:33:17.874 2 DEBUG nova.network.neutron [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:33:17 np0005481680 nova_compute[264665]: 2025-10-12 21:33:17.895 2 DEBUG oslo_concurrency.lockutils [req-a5fdef8e-f342-40d2-9879-806bab6000b0 req-bac55da1-d6da-4676-85e9-0acbd8a21ac4 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
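[editor's note] The refreshed instance_info_cache above carries the fixed/floating address pair for port 56287bae-33ab-4007-8c88-0adeea38f1fd; the DHCPACK for the same MAC and address appears at 21:33:23 below. Extracting those values is a plain dict walk, assuming network_info is the deserialized list from the "Updating instance_info_cache" line:

    vif = network_info[0]
    fixed = vif['network']['subnets'][0]['ips'][0]
    print(vif['address'])                        # fa:16:3e:45:d6:1e
    print(fixed['address'])                      # 10.100.0.4
    print(fixed['floating_ips'][0]['address'])   # 192.168.122.205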
Oct 12 17:33:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:18.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:33:18
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms']
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:33:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:33:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:33:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:18.369 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:18.370 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:18.371 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:18.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:33:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
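[editor's note] Each pg_autoscaler line above is consistent with pg target = capacity ratio x bias x 300, quantized (with per-pool floors) afterwards; the factor 300 would match 3 OSDs at the default mon_target_pg_per_osd of 100, which is inferred from the numbers rather than stated in the log. Checking two of the logged values:

    for pool, ratio, bias in [
        ('images', 0.000665858301588852, 1.0),
        ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0),
    ]:
        # 300 = 3 OSDs * mon_target_pg_per_osd (100) -- an inference, see above.
        print(pool, ratio * bias * 300)
    # -> images 0.19975749047665559 and cephfs.cephfs.meta 0.0006104707950771635,
    #    matching the "pg target" values logged above.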
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:33:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:33:18 np0005481680 nova_compute[264665]: 2025-10-12 21:33:18.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 12 17:33:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:20.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:21 np0005481680 podman[285391]: 2025-10-12 21:33:21.129630085 +0000 UTC m=+0.082379206 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 12 17:33:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:21.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:21 np0005481680 nova_compute[264665]: 2025-10-12 21:33:21.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:33:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:22] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 12 17:33:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:22] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 12 17:33:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:22.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:23.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:23 np0005481680 podman[285538]: 2025-10-12 21:33:23.491464475 +0000 UTC m=+0.114806831 container exec 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:33:23 np0005481680 podman[285538]: 2025-10-12 21:33:23.623197765 +0000 UTC m=+0.246540111 container exec_died 88c795ee6783342e17fba249200449cb7962ccd4d5e04d62f9831bf937a32521 (image=quay.io/ceph/ceph:v19, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:33:23 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:23Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:d6:1e 10.100.0.4
Oct 12 17:33:23 np0005481680 ovn_controller[154617]: 2025-10-12T21:33:23Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:d6:1e 10.100.0.4
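The DHCPOFFER/DHCPACK pair above for fa:16:3e:45:d6:1e / 10.100.0.4 comes from OVN's native DHCP responder in ovn-controller's pinctrl thread; the lease data lives in DHCP_Options rows in the OVN northbound DB rather than in a dnsmasq process. A sketch that lists those rows through ovsdbapp (the same library the nova_compute debug lines use), assuming the NB DB is reachable at tcp:127.0.0.1:6641:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_northbound import impl_idl

    # Connect to the OVN NB DB (endpoint is an assumption) and dump the
    # DHCP_Options rows that back offers like the one logged above.
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6641", "OVN_Northbound")
    api = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    for row in api.dhcp_options_list().execute(check_error=True):
        print(row.cidr, row.options)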
Oct 12 17:33:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:33:23 np0005481680 nova_compute[264665]: 2025-10-12 21:33:23.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:24 np0005481680 podman[285656]: 2025-10-12 21:33:24.36530765 +0000 UTC m=+0.078800205 container exec 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:24 np0005481680 podman[285656]: 2025-10-12 21:33:24.375300395 +0000 UTC m=+0.088792940 container exec_died 0979bac77c05f47d014b68c6c76c57533b035b599f30cff5624b6c1f9dacfa58 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
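The _set_new_cache_sizes line above is the mon's cache autotuner republishing its split. Converting the byte counts to MiB shows the three allocations fit inside the roughly 973 MiB cache_size target:

    # Convert the autotune numbers from the log line above into MiB and
    # check the allocations against cache_size.
    vals = {"cache_size": 1020054731, "inc_alloc": 348127232,
            "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, b in vals.items():
        print(f"{name}: {b / 2**20:.0f} MiB")
    total = vals["inc_alloc"] + vals["full_alloc"] + vals["kv_alloc"]
    print(f"allocs total: {total / 2**20:.0f} MiB")  # 968 MiB <= ~973 MiB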
Oct 12 17:33:25 np0005481680 podman[285816]: 2025-10-12 21:33:25.178458552 +0000 UTC m=+0.075031039 container exec 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:33:25 np0005481680 podman[285816]: 2025-10-12 21:33:25.196564562 +0000 UTC m=+0.093137039 container exec_died 231eaabf82cbb4ac74713f43611e91b691a7e3c41a028f585fb79a5462c8866a (image=quay.io/ceph/haproxy:2.3, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-haproxy-nfs-cephfs-compute-0-wruenf)
Oct 12 17:33:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:25.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:25 np0005481680 podman[285885]: 2025-10-12 21:33:25.559183745 +0000 UTC m=+0.093549090 container exec 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct 12 17:33:25 np0005481680 podman[285885]: 2025-10-12 21:33:25.591552988 +0000 UTC m=+0.125918273 container exec_died 16578fdd3043799a5453d0757a9c10c6d5300edec1e85b527f9dcf780d7fbe8b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-keepalived-nfs-cephfs-compute-0-zelovc, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, version=2.2.4, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20)
Oct 12 17:33:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 12 17:33:25 np0005481680 podman[285953]: 2025-10-12 21:33:25.911800004 +0000 UTC m=+0.085485295 container exec ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:25 np0005481680 podman[285953]: 2025-10-12 21:33:25.960711048 +0000 UTC m=+0.134396289 container exec_died ef14d924f13a0edf7527299eee5e022f9df45a281850e3cd3934e8a6d3311c87 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:26.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:26 np0005481680 podman[286027]: 2025-10-12 21:33:26.305947058 +0000 UTC m=+0.083265738 container exec 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:33:26 np0005481680 podman[286027]: 2025-10-12 21:33:26.495292604 +0000 UTC m=+0.272611214 container exec_died 91ae54783666e81ecee9a5a2dba5cbb1a4e36e058b7f658e514ddf9acec89b6e (image=quay.io/ceph/grafana:10.4.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct 12 17:33:26 np0005481680 nova_compute[264665]: 2025-10-12 21:33:26.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:27 np0005481680 podman[286140]: 2025-10-12 21:33:27.095811978 +0000 UTC m=+0.085693001 container exec a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:27 np0005481680 podman[286140]: 2025-10-12 21:33:27.148968 +0000 UTC m=+0.138849033 container exec_died a74f417593f85714abb80f5db387e2a5836f18f2c2caa519436425f90e9dc175 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5adb8c35-1b74-5730-a252-62321f654cd5-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 12 17:33:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:33:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:33:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:27.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:27.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
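The alertmanager dispatcher above gives up after two retries because both ceph-dashboard webhook receivers time out ("context deadline exceeded"): compute-1 and compute-2 accept or drop the connection but never answer within the notify timeout. A quick probe of one receiver URL (copied from the log) to distinguish refusal from timeout:

    import urllib.request, urllib.error

    # POST an empty JSON body to the failing receiver from the log line above.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("status:", resp.status)
    except urllib.error.URLError as exc:
        print("failed:", exc.reason)  # a socket timeout here matches the log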
Oct 12 17:33:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 113 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:28.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
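The handle_command/audit pairs above show the cephadm mgr driving the mon with JSON mon commands (config-key set, auth get, osd tree, config generate-minimal-conf). The same command format can be submitted through the librados Python binding; a sketch, assuming admin credentials and a reachable cluster via /etc/ceph/ceph.conf:

    import json
    import rados

    # Re-issue the "osd tree" query dispatched above, filtered to destroyed OSDs.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, errs)
    print(json.loads(outbuf) if outbuf else {})
    cluster.shutdown()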
Oct 12 17:33:28 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health check update: 3 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 12 17:33:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:28.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.859968027 +0000 UTC m=+0.069656013 container create 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:33:28 np0005481680 systemd[1]: Started libpod-conmon-7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061.scope.
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.830342423 +0000 UTC m=+0.040030449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:28 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.978233945 +0000 UTC m=+0.187921971 container init 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.99104244 +0000 UTC m=+0.200730426 container start 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 17:33:28 np0005481680 nova_compute[264665]: 2025-10-12 21:33:28.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.996481199 +0000 UTC m=+0.206169235 container attach 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:33:28 np0005481680 priceless_ritchie[286368]: 167 167
Oct 12 17:33:28 np0005481680 podman[286351]: 2025-10-12 21:33:28.999471995 +0000 UTC m=+0.209159971 container died 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:33:29 np0005481680 systemd[1]: libpod-7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061.scope: Deactivated successfully.
Oct 12 17:33:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-92c0026d1224baf33d9e7b1a8cea3454e41f705b63aee7201e2cc22cfe1523ca-merged.mount: Deactivated successfully.
Oct 12 17:33:29 np0005481680 podman[286351]: 2025-10-12 21:33:29.059363688 +0000 UTC m=+0.269051674 container remove 7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:33:29 np0005481680 systemd[1]: libpod-conmon-7c9bcaa35e8b473df70ad5bdba56d49c5713d297a0e3d3579fb1eaf553aac061.scope: Deactivated successfully.
Oct 12 17:33:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:29.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:29 np0005481680 ceph-mon[73608]: Health check update: 3 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.33272043 +0000 UTC m=+0.077254215 container create b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:33:29 np0005481680 systemd[1]: Started libpod-conmon-b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b.scope.
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.308450143 +0000 UTC m=+0.052983998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:29 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:29 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.447163141 +0000 UTC m=+0.191697006 container init b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.459086575 +0000 UTC m=+0.203620390 container start b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.465040186 +0000 UTC m=+0.209573981 container attach b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 12 17:33:29 np0005481680 practical_mendel[286410]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:33:29 np0005481680 practical_mendel[286410]: --> All data devices are unavailable
Oct 12 17:33:29 np0005481680 systemd[1]: libpod-b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b.scope: Deactivated successfully.
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.867098632 +0000 UTC m=+0.611632467 container died b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 12 17:33:29 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e418565dc162fd5babe729e0b8dbe465c21162b9ecfef805ccd845acb7000b72-merged.mount: Deactivated successfully.
Oct 12 17:33:29 np0005481680 podman[286394]: 2025-10-12 21:33:29.926453231 +0000 UTC m=+0.670987036 container remove b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_mendel, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:33:29 np0005481680 systemd[1]: libpod-conmon-b76d8ae4fd94d7e8aeb0e51442c0f3aa7c92ab90b57fcc48c7be25dbd6f7020b.scope: Deactivated successfully.
Oct 12 17:33:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 4.6 MiB/s wr, 106 op/s
Oct 12 17:33:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:30.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.743504782 +0000 UTC m=+0.065207430 container create a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 12 17:33:30 np0005481680 systemd[1]: Started libpod-conmon-a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d.scope.
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.718481996 +0000 UTC m=+0.040184684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.857565813 +0000 UTC m=+0.179268511 container init a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.869675191 +0000 UTC m=+0.191377829 container start a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.873903179 +0000 UTC m=+0.195605877 container attach a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 17:33:30 np0005481680 competent_lamport[286544]: 167 167
Oct 12 17:33:30 np0005481680 systemd[1]: libpod-a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d.scope: Deactivated successfully.
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.878197368 +0000 UTC m=+0.199900036 container died a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:33:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-efe6d1926f170a6a2550a5fcb0e4df2372084a7b92ca6915255f472bfde2ee69-merged.mount: Deactivated successfully.
Oct 12 17:33:30 np0005481680 podman[286528]: 2025-10-12 21:33:30.938307707 +0000 UTC m=+0.260010355 container remove a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 12 17:33:30 np0005481680 systemd[1]: libpod-conmon-a914453e294d5cda11be2ae220211e90eceeb22e003a1eb2ca252a109f35863d.scope: Deactivated successfully.
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.21484499 +0000 UTC m=+0.072793512 container create fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 17:33:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:31.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.186193831 +0000 UTC m=+0.044142393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:31 np0005481680 systemd[1]: Started libpod-conmon-fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb.scope.
Oct 12 17:33:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7830c5574621ff9aa9b5dfe1edcd5b2f997d27409093b9aa19084eebf3f6a9f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7830c5574621ff9aa9b5dfe1edcd5b2f997d27409093b9aa19084eebf3f6a9f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7830c5574621ff9aa9b5dfe1edcd5b2f997d27409093b9aa19084eebf3f6a9f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:31 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7830c5574621ff9aa9b5dfe1edcd5b2f997d27409093b9aa19084eebf3f6a9f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.361251654 +0000 UTC m=+0.219200166 container init fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.375607179 +0000 UTC m=+0.233555701 container start fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.380411061 +0000 UTC m=+0.238359623 container attach fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:33:31 np0005481680 nova_compute[264665]: 2025-10-12 21:33:31.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]: {
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:    "0": [
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:        {
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "devices": [
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "/dev/loop3"
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            ],
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "lv_name": "ceph_lv0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "lv_size": "21470642176",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "name": "ceph_lv0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "tags": {
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.cluster_name": "ceph",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.crush_device_class": "",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.encrypted": "0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.osd_id": "0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.type": "block",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.vdo": "0",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:                "ceph.with_tpm": "0"
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            },
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "type": "block",
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:            "vg_name": "ceph_vg0"
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:        }
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]:    ]
Oct 12 17:33:31 np0005481680 determined_matsumoto[286586]: }
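The JSON the determined_matsumoto container emits above looks like `ceph-volume lvm list --format json` output: one existing OSD (osd.0) on LV ceph_vg0/ceph_lv0 backed by /dev/loop3, consistent with the earlier practical_mendel report that all passed data devices are unavailable (the LV is already an OSD). A sketch that maps OSD ids to devices from a saved copy of that JSON; the capture path is hypothetical, and the key names match the output logged above:

    import json

    # Parse a saved copy of the ceph-volume JSON above and map OSD ids to
    # their LVs and backing devices; the file path is hypothetical.
    with open("ceph-volume-lvm-list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")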
Oct 12 17:33:31 np0005481680 systemd[1]: libpod-fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb.scope: Deactivated successfully.
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.752614818 +0000 UTC m=+0.610563330 container died fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:33:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7830c5574621ff9aa9b5dfe1edcd5b2f997d27409093b9aa19084eebf3f6a9f3-merged.mount: Deactivated successfully.
Oct 12 17:33:31 np0005481680 podman[286568]: 2025-10-12 21:33:31.813006084 +0000 UTC m=+0.670954606 container remove fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:33:31 np0005481680 systemd[1]: libpod-conmon-fe1bace2268a2b23baf0c51761abd47e1ee4d36b2269224c647fc9f196ec16cb.scope: Deactivated successfully.
Oct 12 17:33:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:32] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 12 17:33:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:32] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Oct 12 17:33:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 4.6 MiB/s wr, 106 op/s
Oct 12 17:33:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:32.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.610951419 +0000 UTC m=+0.078428476 container create d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 17:33:32 np0005481680 systemd[1]: Started libpod-conmon-d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056.scope.
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.57525284 +0000 UTC m=+0.042729957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:32 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.71797012 +0000 UTC m=+0.185447227 container init d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.72854346 +0000 UTC m=+0.196020517 container start d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.733333891 +0000 UTC m=+0.200810968 container attach d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:33:32 np0005481680 pedantic_allen[286716]: 167 167
Oct 12 17:33:32 np0005481680 systemd[1]: libpod-d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056.scope: Deactivated successfully.
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.736936893 +0000 UTC m=+0.204413960 container died d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:33:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-1e726acaa140d9a4986de65fe275e1a53474d1af67ba9ffcbf897ad0be68f007-merged.mount: Deactivated successfully.
Oct 12 17:33:32 np0005481680 podman[286699]: 2025-10-12 21:33:32.788497454 +0000 UTC m=+0.255974521 container remove d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_allen, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:33:32 np0005481680 systemd[1]: libpod-conmon-d209ca8dcca1e2c2ed4b623bd3674920fb0896739cd59ff99db6b6887d25e056.scope: Deactivated successfully.
Oct 12 17:33:33 np0005481680 podman[286741]: 2025-10-12 21:33:33.030204932 +0000 UTC m=+0.072647509 container create 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:33:33 np0005481680 systemd[1]: Started libpod-conmon-269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd.scope.
Oct 12 17:33:33 np0005481680 podman[286741]: 2025-10-12 21:33:32.999831589 +0000 UTC m=+0.042274206 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:33:33 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:33:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23a69912665ca8c7cc5d410a56e5edbdaf4ed4e0cba4cae21d5bfd689ee2cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23a69912665ca8c7cc5d410a56e5edbdaf4ed4e0cba4cae21d5bfd689ee2cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23a69912665ca8c7cc5d410a56e5edbdaf4ed4e0cba4cae21d5bfd689ee2cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:33 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c23a69912665ca8c7cc5d410a56e5edbdaf4ed4e0cba4cae21d5bfd689ee2cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:33:33 np0005481680 podman[286741]: 2025-10-12 21:33:33.14219032 +0000 UTC m=+0.184632947 container init 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 12 17:33:33 np0005481680 podman[286741]: 2025-10-12 21:33:33.155833107 +0000 UTC m=+0.198275684 container start 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:33:33 np0005481680 podman[286741]: 2025-10-12 21:33:33.15988482 +0000 UTC m=+0.202327407 container attach 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:33:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:33.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:33:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:33:34 np0005481680 nova_compute[264665]: 2025-10-12 21:33:33.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:34 np0005481680 lvm[286835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:33:34 np0005481680 lvm[286835]: VG ceph_vg0 finished
Oct 12 17:33:34 np0005481680 exciting_wozniak[286758]: {}
Oct 12 17:33:34 np0005481680 systemd[1]: libpod-269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd.scope: Deactivated successfully.
Oct 12 17:33:34 np0005481680 systemd[1]: libpod-269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd.scope: Consumed 1.579s CPU time.
Oct 12 17:33:34 np0005481680 podman[286741]: 2025-10-12 21:33:34.087590155 +0000 UTC m=+1.130032742 container died 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 12 17:33:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 4.6 MiB/s wr, 106 op/s
Oct 12 17:33:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:34 np0005481680 systemd[1]: var-lib-containers-storage-overlay-7c23a69912665ca8c7cc5d410a56e5edbdaf4ed4e0cba4cae21d5bfd689ee2cb-merged.mount: Deactivated successfully.
Oct 12 17:33:34 np0005481680 podman[286741]: 2025-10-12 21:33:34.15738401 +0000 UTC m=+1.199826587 container remove 269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:33:34 np0005481680 systemd[1]: libpod-conmon-269041e2c67b5c8ba3020692a7d86af2d61cd189033614e7c5bcc973914145bd.scope: Deactivated successfully.
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:34 np0005481680 podman[286851]: 2025-10-12 21:33:34.302280326 +0000 UTC m=+0.129790503 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:33:34 np0005481680 podman[286858]: 2025-10-12 21:33:34.342721334 +0000 UTC m=+0.135779545 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 12 17:33:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 12 17:33:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:36.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:36 np0005481680 nova_compute[264665]: 2025-10-12 21:33:36.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:37.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 12 17:33:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:38.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:38.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:39 np0005481680 nova_compute[264665]: 2025-10-12 21:33:38.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:39 np0005481680 nova_compute[264665]: 2025-10-12 21:33:39.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:39 np0005481680 nova_compute[264665]: 2025-10-12 21:33:39.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 12 17:33:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 12 17:33:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:40.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:40 np0005481680 nova_compute[264665]: 2025-10-12 21:33:40.673 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:40 np0005481680 nova_compute[264665]: 2025-10-12 21:33:40.674 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:41 np0005481680 nova_compute[264665]: 2025-10-12 21:33:41.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:42] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 12 17:33:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:42] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 12 17:33:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 12 17:33:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:42.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.702 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:33:42 np0005481680 nova_compute[264665]: 2025-10-12 21:33:42.702 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:33:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:33:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958950481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.202 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:33:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.298 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.299 2 DEBUG nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.507 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.509 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4390MB free_disk=59.92185592651367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.509 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.509 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.647 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Instance d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.648 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.649 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:33:43 np0005481680 nova_compute[264665]: 2025-10-12 21:33:43.720 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct 12 17:33:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:33:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430489826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.193 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.202 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.226 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.255 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:33:44 np0005481680 nova_compute[264665]: 2025-10-12 21:33:44.256 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:33:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:45 np0005481680 podman[286974]: 2025-10-12 21:33:45.135724134 +0000 UTC m=+0.089265782 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 12 17:33:45 np0005481680 nova_compute[264665]: 2025-10-12 21:33:45.257 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:45 np0005481680 nova_compute[264665]: 2025-10-12 21:33:45.258 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:45 np0005481680 nova_compute[264665]: 2025-10-12 21:33:45.258 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 82 op/s
Oct 12 17:33:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:46.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:46 np0005481680 nova_compute[264665]: 2025-10-12 21:33:46.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:46 np0005481680 nova_compute[264665]: 2025-10-12 21:33:46.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:47.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:47 np0005481680 nova_compute[264665]: 2025-10-12 21:33:47.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:47 np0005481680 nova_compute[264665]: 2025-10-12 21:33:47.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 12 17:33:47 np0005481680 nova_compute[264665]: 2025-10-12 21:33:47.768 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 167 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 71 op/s
Oct 12 17:33:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:48.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:33:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2012112621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:33:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2012112621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:33:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:48.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:49 np0005481680 nova_compute[264665]: 2025-10-12 21:33:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:49.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:49 np0005481680 nova_compute[264665]: 2025-10-12 21:33:49.767 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:49 np0005481680 nova_compute[264665]: 2025-10-12 21:33:49.768 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:33:49 np0005481680 nova_compute[264665]: 2025-10-12 21:33:49.768 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:33:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:50 np0005481680 nova_compute[264665]: 2025-10-12 21:33:50.007 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 12 17:33:50 np0005481680 nova_compute[264665]: 2025-10-12 21:33:50.007 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 12 17:33:50 np0005481680 nova_compute[264665]: 2025-10-12 21:33:50.008 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 12 17:33:50 np0005481680 nova_compute[264665]: 2025-10-12 21:33:50.008 2 DEBUG nova.objects.instance [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 12 17:33:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 12 17:33:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:51 np0005481680 nova_compute[264665]: 2025-10-12 21:33:51.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:52] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Oct 12 17:33:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:33:52] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Oct 12 17:33:52 np0005481680 nova_compute[264665]: 2025-10-12 21:33:52.028 2 DEBUG nova.network.neutron [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 12 17:33:52 np0005481680 nova_compute[264665]: 2025-10-12 21:33:52.047 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 12 17:33:52 np0005481680 nova_compute[264665]: 2025-10-12 21:33:52.048 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 12 17:33:52 np0005481680 podman[287028]: 2025-10-12 21:33:52.118268775 +0000 UTC m=+0.085753962 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:33:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:33:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:52 np0005481680 nova_compute[264665]: 2025-10-12 21:33:52.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:33:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:53.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:54 np0005481680 nova_compute[264665]: 2025-10-12 21:33:54.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:33:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:33:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:54 np0005481680 nova_compute[264665]: 2025-10-12 21:33:54.216 2 INFO nova.compute.manager [None req-feddcbc0-ce22-4d86-9604-6480eaefacbc 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Get console output
Oct 12 17:33:54 np0005481680 nova_compute[264665]: 2025-10-12 21:33:54.230 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 12 17:33:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:33:55 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:55.178 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:33:55 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:33:55.179 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:33:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:55.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.371 2 DEBUG nova.compute.manager [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.372 2 DEBUG nova.compute.manager [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing instance network info cache due to event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.372 2 DEBUG oslo_concurrency.lockutils [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.372 2 DEBUG oslo_concurrency.lockutils [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.373 2 DEBUG nova.network.neutron [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.403 2 DEBUG nova.compute.manager [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.404 2 DEBUG oslo_concurrency.lockutils [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.404 2 DEBUG oslo_concurrency.lockutils [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.405 2 DEBUG oslo_concurrency.lockutils [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.405 2 DEBUG nova.compute.manager [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:33:55 np0005481680 nova_compute[264665]: 2025-10-12 21:33:55.405 2 WARNING nova.compute.manager [req-20a0a20e-8b30-4e8f-9bc0-534e0c8e9f54 req-50fd7882-4990-4213-8965-a27389c7d371 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state None.#033[00m
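
The Acquiring/acquired/released triplet above is oslo.concurrency's lock tracing: external instance events are serialized on a "<uuid>-events" lock while pop_instance_event looks for a registered waiter, and since nothing was waiting for network-vif-unplugged the event is logged as unexpected (the instance is still active). The locking pattern, sketched with the lockutils API the traces come from:

    from oslo_concurrency import lockutils

    uuid = "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401"  # instance from the log
    with lockutils.lock(f"{uuid}-events"):
        # look up a registered waiter for the event; none exists here, which
        # produces the "No waiting events found" / "unexpected event" pair
        pass
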
Oct 12 17:33:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 12 17:33:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:56.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.390 2 INFO nova.compute.manager [None req-317d9a55-4b7d-47d4-8fd5-a056c2011331 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Get console output#033[00m
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.402 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.561 2 DEBUG nova.network.neutron [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated VIF entry in instance network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.562 2 DEBUG nova.network.neutron [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
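
The instance_info_cache payload above nests fixed addresses and their floating IPs per subnet. Walking a trimmed copy of that structure recovers the 10.100.0.4 -> 192.168.122.205 mapping:

    network_info = [{
        "id": "56287bae-33ab-4007-8c88-0adeea38f1fd",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.4",
                                          "floating_ips": [{"address": "192.168.122.205"}]}]}]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(ip["address"], "->", [f["address"] for f in ip["floating_ips"]])
    # 10.100.0.4 -> ['192.168.122.205']
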
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:56 np0005481680 nova_compute[264665]: 2025-10-12 21:33:56.577 2 DEBUG oslo_concurrency.lockutils [req-fc599614-672a-4f2c-82b4-55f5043e9a5b req-c4933856-9c05-4b1d-a7f9-b56d50709d30 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:33:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:57.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
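
Alertmanager cannot deliver the ceph-dashboard notification to either webhook receiver on port 8443 (context deadline exceeded for both compute-1 and compute-2). A minimal stand-in receiver that would accept the POST and end the retries; it assumes nothing about the real dashboard beyond the URL in the log:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)  # any 2xx counts as delivered to Alertmanager
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
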
Oct 12 17:33:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:33:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:57.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.477 2 DEBUG nova.compute.manager [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.478 2 DEBUG oslo_concurrency.lockutils [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.478 2 DEBUG oslo_concurrency.lockutils [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.479 2 DEBUG oslo_concurrency.lockutils [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.479 2 DEBUG nova.compute.manager [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:33:57 np0005481680 nova_compute[264665]: 2025-10-12 21:33:57.480 2 WARNING nova.compute.manager [req-2385ac7d-d85e-4303-8f9f-9a561e921805 req-793c6120-70a5-495e-bfce-86fee78e97b9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state None.#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.020 2 DEBUG nova.compute.manager [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.021 2 DEBUG nova.compute.manager [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing instance network info cache due to event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.021 2 DEBUG oslo_concurrency.lockutils [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.022 2 DEBUG oslo_concurrency.lockutils [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.022 2 DEBUG nova.network.neutron [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:33:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct 12 17:33:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:33:58.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.191 2 INFO nova.compute.manager [None req-f5fce62c-be25-4d5b-8ea6-5d47f597a1cb 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Get console output#033[00m
Oct 12 17:33:58 np0005481680 nova_compute[264665]: 2025-10-12 21:33:58.199 629 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 12 17:33:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:33:58.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:33:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:33:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:33:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:33:59.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.581 2 DEBUG nova.compute.manager [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.582 2 DEBUG oslo_concurrency.lockutils [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.583 2 DEBUG oslo_concurrency.lockutils [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.583 2 DEBUG oslo_concurrency.lockutils [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.583 2 DEBUG nova.compute.manager [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:33:59 np0005481680 nova_compute[264665]: 2025-10-12 21:33:59.584 2 WARNING nova.compute.manager [req-83f38d11-64bc-43d0-9bdb-57c4c3e3c03e req-41ce5559-7e12-4f8c-8ba0-ff25c8a7de0b 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state None.#033[00m
Oct 12 17:33:59 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 302 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 12 17:34:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:00.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:00 np0005481680 nova_compute[264665]: 2025-10-12 21:34:00.811 2 DEBUG nova.network.neutron [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated VIF entry in instance network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:34:00 np0005481680 nova_compute[264665]: 2025-10-12 21:34:00.812 2 DEBUG nova.network.neutron [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:34:00 np0005481680 nova_compute[264665]: 2025-10-12 21:34:00.829 2 DEBUG oslo_concurrency.lockutils [req-15868b6d-61ba-42e5-b9cb-c9077eb7d8ff req-816096cf-38c2-4837-9e2a-690f2191c9e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:34:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:01.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.688 2 DEBUG nova.compute.manager [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.688 2 DEBUG oslo_concurrency.lockutils [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.689 2 DEBUG oslo_concurrency.lockutils [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.689 2 DEBUG oslo_concurrency.lockutils [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.689 2 DEBUG nova.compute.manager [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:34:01 np0005481680 nova_compute[264665]: 2025-10-12 21:34:01.690 2 WARNING nova.compute.manager [req-2582c6a4-5194-4413-ad31-319b769fe963 req-78642049-4b7e-43a9-8e6c-5e7e6c27489c 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state None.#033[00m
Oct 12 17:34:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:02] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Oct 12 17:34:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:02] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
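
These two lines record the same scrape twice, once from the container's stdout and once through the mgr's cherrypy access logger: Prometheus pulling the ceph-mgr prometheus module. An equivalent manual scrape; the host and port (9283, the module's default) are assumptions, as the access log omits the listening endpoint:

    import urllib.request

    # Host and port assumed; the access log shows only the scraping client.
    with urllib.request.urlopen("http://np0005481680:9283/metrics", timeout=5) as r:
        print(r.read().decode().splitlines()[0])
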
Oct 12 17:34:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 12 17:34:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:34:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
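
The audit entry shows the mgr dispatching an "osd blocklist ls" mon command. The same JSON command can be sent through librados; a sketch assuming the conventional /etc/ceph/ceph.conf and an admin keyring on this host:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
        print(ret, json.loads(out or b"[]"))
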
Oct 12 17:34:04 np0005481680 nova_compute[264665]: 2025-10-12 21:34:04.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 12 17:34:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:05 np0005481680 podman[287061]: 2025-10-12 21:34:05.153270858 +0000 UTC m=+0.110741377 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 12 17:34:05 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:05.197 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
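
This one-command transaction bumps neutron:ovn-metadata-sb-cfg in the chassis' external_ids to 13, acknowledging the nb_cfg value from the SB_Global update logged at 21:33:55. A rough CLI equivalent (record UUID copied from the line; the key is quoted because it contains colons, and plain ovn-sbctl set has no direct counterpart to the if_exists flag):

    import subprocess

    subprocess.run([
        "ovn-sbctl", "set", "Chassis_Private",
        "4fd585ac-c8a3-45e9-b563-f151bc390e2e",
        'external_ids:"neutron:ovn-metadata-sb-cfg"=13',
    ], check=True)
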
Oct 12 17:34:05 np0005481680 podman[287062]: 2025-10-12 21:34:05.221280617 +0000 UTC m=+0.167571022 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:34:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Oct 12 17:34:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:06.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:06 np0005481680 nova_compute[264665]: 2025-10-12 21:34:06.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:07.263Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:07.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.463 2 DEBUG nova.compute.manager [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.463 2 DEBUG nova.compute.manager [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing instance network info cache due to event network-changed-56287bae-33ab-4007-8c88-0adeea38f1fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.464 2 DEBUG oslo_concurrency.lockutils [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.464 2 DEBUG oslo_concurrency.lockutils [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquired lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.465 2 DEBUG nova.network.neutron [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Refreshing network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.557 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.558 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.558 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.558 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.559 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.560 2 INFO nova.compute.manager [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Terminating instance#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.561 2 DEBUG nova.compute.manager [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 12 17:34:07 np0005481680 kernel: tap56287bae-33 (unregistering): left promiscuous mode
Oct 12 17:34:07 np0005481680 NetworkManager[44859]: <info>  [1760304847.6455] device (tap56287bae-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 ovn_controller[154617]: 2025-10-12T21:34:07Z|00094|binding|INFO|Releasing lport 56287bae-33ab-4007-8c88-0adeea38f1fd from this chassis (sb_readonly=0)
Oct 12 17:34:07 np0005481680 ovn_controller[154617]: 2025-10-12T21:34:07Z|00095|binding|INFO|Setting lport 56287bae-33ab-4007-8c88-0adeea38f1fd down in Southbound
Oct 12 17:34:07 np0005481680 ovn_controller[154617]: 2025-10-12T21:34:07Z|00096|binding|INFO|Removing iface tap56287bae-33 ovn-installed in OVS
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:07.681 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:d6:1e 10.100.0.4'], port_security=['fa:16:3e:45:d6:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd4877f49-ddd8-47a2-9a2f-6c2e26c9f401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd651b6f-1724-42cd-a3ff-037629cdb232', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '996cf7b314dd4598812dc5b6cda29b64', 'neutron:revision_number': '8', 'neutron:security_group_ids': '685e57f6-0891-4206-8c34-eec64721202d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59359a0d-cfbb-460a-87ed-6bbf48fcb204, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>], logical_port=56287bae-33ab-4007-8c88-0adeea38f1fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8b01a1c7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:34:07 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:07.684 164459 INFO neutron.agent.ovn.metadata.agent [-] Port 56287bae-33ab-4007-8c88-0adeea38f1fd in datapath bd651b6f-1724-42cd-a3ff-037629cdb232 unbound from our chassis#033[00m
Oct 12 17:34:07 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:07.685 164459 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd651b6f-1724-42cd-a3ff-037629cdb232, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 12 17:34:07 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:07.687 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[70db8a78-9f23-4446-8080-ae2483dd2278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:07 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:07.688 164459 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 namespace which is not needed anymore#033[00m
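
Metadata namespaces are named ovnmeta-<network_id>, which is why the agent tears down ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 once the network's last VIF on this chassis is gone. A quick check that the namespace has been removed:

    import subprocess

    ns = "ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232"
    out = subprocess.run(["ip", "netns", "list"], capture_output=True, text=True).stdout
    print(ns in out)  # False once the cleanup above has completed
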
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 12 17:34:07 np0005481680 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 15.833s CPU time.
Oct 12 17:34:07 np0005481680 systemd-machined[218338]: Machine qemu-6-instance-0000000b terminated.
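
The scope names in the two systemd lines are escaped unit names: systemd encodes "-" as \x2d, so machine-qemu\x2d6\x2dinstance\x2d0000000b.scope and the machine qemu-6-instance-0000000b reported by systemd-machined refer to the same guest. Decoding it:

    scope = r"machine-qemu\x2d6\x2dinstance\x2d0000000b.scope"
    print(scope.replace(r"\x2d", "-"))  # machine-qemu-6-instance-0000000b.scope
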
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.803 2 INFO nova.virt.libvirt.driver [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Instance destroyed successfully.#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.804 2 DEBUG nova.objects.instance [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lazy-loading 'resources' on Instance uuid d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.852 2 DEBUG nova.virt.libvirt.vif [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-12T21:33:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-417374243',display_name='tempest-TestNetworkBasicOps-server-417374243',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-417374243',id=11,image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZc3He0shAPkOombcjIUGdP9n1u80HjNEPh6T4ZbjB/U75NhThD8XjiO3TIYuOBcapxnIe10ozz2IXBzeuKlp5zNZh7B6bxabbbz46S6IB5hJcME+xFC5Abfq2h8a/4jw==',key_name='tempest-TestNetworkBasicOps-1268665234',keypairs=<?>,launch_index=0,launched_at=2025-10-12T21:33:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='996cf7b314dd4598812dc5b6cda29b64',ramdisk_id='',reservation_id='r-kgruatmz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0838cede-7f25-4ac2-ae16-04e86e2d6b46',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-977144451',owner_user_name='tempest-TestNetworkBasicOps-977144451-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-12T21:33:11Z,user_data=None,user_id='935f7ca5b6aa4bff9c9b406ff9cf8dc3',uuid=d4877f49-ddd8-47a2-9a2f-6c2e26c9f401,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.853 2 DEBUG nova.network.os_vif_util [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converting VIF {"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.854 2 DEBUG nova.network.os_vif_util [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.854 2 DEBUG os_vif [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56287bae-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:07 np0005481680 nova_compute[264665]: 2025-10-12 21:34:07.865 2 INFO os_vif [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:d6:1e,bridge_name='br-int',has_traffic_filtering=True,id=56287bae-33ab-4007-8c88-0adeea38f1fd,network=Network(bd651b6f-1724-42cd-a3ff-037629cdb232),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56287bae-33')#033[00m
Oct 12 17:34:07 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [NOTICE]   (285350) : haproxy version is 2.8.14-c23fe91
Oct 12 17:34:07 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [NOTICE]   (285350) : path to executable is /usr/sbin/haproxy
Oct 12 17:34:07 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [WARNING]  (285350) : Exiting Master process...
Oct 12 17:34:08 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [ALERT]    (285350) : Current worker (285352) exited with code 143 (Terminated)
Oct 12 17:34:08 np0005481680 neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232[285346]: [WARNING]  (285350) : All workers exited. Exiting... (0)
Oct 12 17:34:08 np0005481680 systemd[1]: libpod-32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868.scope: Deactivated successfully.
Oct 12 17:34:08 np0005481680 podman[287171]: 2025-10-12 21:34:08.00979118 +0000 UTC m=+0.149082373 container died 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:34:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.150 2 DEBUG nova.compute.manager [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.150 2 DEBUG oslo_concurrency.lockutils [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.151 2 DEBUG oslo_concurrency.lockutils [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.151 2 DEBUG oslo_concurrency.lockutils [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.152 2 DEBUG nova.compute.manager [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.152 2 DEBUG nova.compute.manager [req-61949b5b-8788-4855-b4d4-312bc4c5ad00 req-11fe0b35-3b3c-4908-9d05-d224f9e7e6e9 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-unplugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 12 17:34:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:08.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868-userdata-shm.mount: Deactivated successfully.
Oct 12 17:34:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5455a1e4ff202242580d47d1956e01b69c7a3118dad1cabda64bb4ebd4f76ec1-merged.mount: Deactivated successfully.
Oct 12 17:34:08 np0005481680 podman[287171]: 2025-10-12 21:34:08.306342403 +0000 UTC m=+0.445633606 container cleanup 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:34:08 np0005481680 systemd[1]: libpod-conmon-32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868.scope: Deactivated successfully.
Oct 12 17:34:08 np0005481680 podman[287220]: 2025-10-12 21:34:08.468737613 +0000 UTC m=+0.126412737 container remove 32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.483 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[ba83d3e7-4048-4551-ae80-95123d2dd6de]: (4, ('Sun Oct 12 09:34:07 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 (32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868)\n32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868\nSun Oct 12 09:34:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 (32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868)\n32ae2187116d9968aaddda0259cfe2501073eb3792e32f9b6d107a107411b868\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.486 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[4622039f-35fa-413c-978f-5efbbc451447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.488 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd651b6f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:08 np0005481680 kernel: tapbd651b6f-10: left promiscuous mode
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.527 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[267b10c3-23d1-45a1-8640-1593f61c8e50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.568 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[e1715d65-0c76-420d-8033-69ebd5ab7c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.570 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[561e936e-bb77-4b3d-a4e8-974058b48062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.594 271121 DEBUG oslo.privsep.daemon [-] privsep: reply[38b2c3c1-468f-4849-a6b0-dd48167d62b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445725, 'reachable_time': 37994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287237, 'error': None, 'target': 'ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.597 164600 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd651b6f-1724-42cd-a3ff-037629cdb232 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 12 17:34:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:08.597 164600 DEBUG oslo.privsep.daemon [-] privsep: reply[08761d2b-37ba-4401-8ce1-3003a453081e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 12 17:34:08 np0005481680 systemd[1]: run-netns-ovnmeta\x2dbd651b6f\x2d1724\x2d42cd\x2da3ff\x2d037629cdb232.mount: Deactivated successfully.
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.820 2 INFO nova.virt.libvirt.driver [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Deleting instance files /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_del#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.821 2 INFO nova.virt.libvirt.driver [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Deletion of /var/lib/nova/instances/d4877f49-ddd8-47a2-9a2f-6c2e26c9f401_del complete#033[00m
Oct 12 17:34:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:08.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.955 2 INFO nova.compute.manager [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Took 1.39 seconds to destroy the instance on the hypervisor.#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.956 2 DEBUG oslo.service.loopingcall [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.957 2 DEBUG nova.compute.manager [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 12 17:34:08 np0005481680 nova_compute[264665]: 2025-10-12 21:34:08.957 2 DEBUG nova.network.neutron [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 12 17:34:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:09 np0005481680 nova_compute[264665]: 2025-10-12 21:34:09.356 2 DEBUG nova.network.neutron [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updated VIF entry in instance network info cache for port 56287bae-33ab-4007-8c88-0adeea38f1fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 12 17:34:09 np0005481680 nova_compute[264665]: 2025-10-12 21:34:09.357 2 DEBUG nova.network.neutron [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [{"id": "56287bae-33ab-4007-8c88-0adeea38f1fd", "address": "fa:16:3e:45:d6:1e", "network": {"id": "bd651b6f-1724-42cd-a3ff-037629cdb232", "bridge": "br-int", "label": "tempest-network-smoke--147280748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "996cf7b314dd4598812dc5b6cda29b64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56287bae-33", "ovs_interfaceid": "56287bae-33ab-4007-8c88-0adeea38f1fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:34:09 np0005481680 nova_compute[264665]: 2025-10-12 21:34:09.465 2 DEBUG oslo_concurrency.lockutils [req-1854cc19-ced9-41b3-a73a-faf00d2873ea req-cbda676c-6099-4910-a4cc-ff0fce1c049f 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Releasing lock "refresh_cache-d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 12 17:34:09 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.111 2 DEBUG nova.network.neutron [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:34:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct 12 17:34:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:10.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.209 2 INFO nova.compute.manager [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Took 1.25 seconds to deallocate network for instance.#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.269 2 DEBUG nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.270 2 DEBUG oslo_concurrency.lockutils [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Acquiring lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.271 2 DEBUG oslo_concurrency.lockutils [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.271 2 DEBUG oslo_concurrency.lockutils [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.271 2 DEBUG nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] No waiting events found dispatching network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.272 2 WARNING nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received unexpected event network-vif-plugged-56287bae-33ab-4007-8c88-0adeea38f1fd for instance with vm_state active and task_state deleting.#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.272 2 DEBUG nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Received event network-vif-deleted-56287bae-33ab-4007-8c88-0adeea38f1fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.273 2 INFO nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Neutron deleted interface 56287bae-33ab-4007-8c88-0adeea38f1fd; detaching it from the instance and deleting it from the info cache#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.273 2 DEBUG nova.network.neutron [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.317 2 DEBUG nova.compute.manager [req-42dd08ed-d7a4-4563-9a85-1bd23327f71d req-51a5aa31-d5d3-449b-a91b-d2a9afe044ad 88fcc5350ac94b92a894fa3cd2a15442 b443741ac090406e8474a8a133ad5042 - - default default] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Detach interface failed, port_id=56287bae-33ab-4007-8c88-0adeea38f1fd, reason: Instance d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.328 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.329 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.388 2 DEBUG oslo_concurrency.processutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:34:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:34:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3504583969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.892 2 DEBUG oslo_concurrency.processutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.899 2 DEBUG nova.compute.provider_tree [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:34:10 np0005481680 nova_compute[264665]: 2025-10-12 21:34:10.941 2 DEBUG nova.scheduler.client.report [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:34:11 np0005481680 nova_compute[264665]: 2025-10-12 21:34:11.015 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:11 np0005481680 nova_compute[264665]: 2025-10-12 21:34:11.070 2 INFO nova.scheduler.client.report [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Deleted allocations for instance d4877f49-ddd8-47a2-9a2f-6c2e26c9f401#033[00m
Oct 12 17:34:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:11.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:11 np0005481680 nova_compute[264665]: 2025-10-12 21:34:11.320 2 DEBUG oslo_concurrency.lockutils [None req-40891d4d-5f79-494e-8695-64a543b7c5e7 935f7ca5b6aa4bff9c9b406ff9cf8dc3 996cf7b314dd4598812dc5b6cda29b64 - - default default] Lock "d4877f49-ddd8-47a2-9a2f-6c2e26c9f401" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:11 np0005481680 nova_compute[264665]: 2025-10-12 21:34:11.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:12] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 12 17:34:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:12] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct 12 17:34:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct 12 17:34:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:34:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:12.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:34:12 np0005481680 nova_compute[264665]: 2025-10-12 21:34:12.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:34:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:13.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:34:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct 12 17:34:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:14 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:15.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct 12 17:34:16 np0005481680 podman[287268]: 2025-10-12 21:34:16.148283394 +0000 UTC m=+0.100658311 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Oct 12 17:34:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:16 np0005481680 nova_compute[264665]: 2025-10-12 21:34:16.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:17.264Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:34:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:17.265Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:17.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:17 np0005481680 nova_compute[264665]: 2025-10-12 21:34:17.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:18 np0005481680 nova_compute[264665]: 2025-10-12 21:34:18.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:18 np0005481680 nova_compute[264665]: 2025-10-12 21:34:18.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:34:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:18.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:34:18
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', '.nfs', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:34:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:34:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:34:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:18.370 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:18.371 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:34:18.371 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:18.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:34:18 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:34:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:34:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:34:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:34:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:34:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:34:19 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7306 writes, 32K keys, 7306 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7306 writes, 7306 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1565 writes, 6659 keys, 1565 commit groups, 1.0 writes per commit group, ingest: 11.41 MB, 0.02 MB/s#012Interval WAL: 1565 writes, 1565 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    131.7      0.38              0.17        18    0.021       0      0       0.0       0.0#012  L6      1/0   13.22 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.3    153.7    131.1      1.63              0.73        17    0.096     93K   9442       0.0       0.0#012 Sum      1/0   13.22 MB   0.0      0.2     0.0      0.2       0.3      0.1       0.0   5.3    124.6    131.2      2.01              0.90        35    0.058     93K   9442       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    119.9    122.8      0.53              0.25         8    0.067     26K   2569       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    153.7    131.1      1.63              0.73        17    0.096     93K   9442       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.5      0.37              0.17        17    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.049, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.26 GB write, 0.11 MB/s write, 0.25 GB read, 0.10 MB/s read, 2.0 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562cd3961350#2 capacity: 304.00 MB usage: 22.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000184 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1235,21.65 MB,7.12237%) FilterBlock(36,272.55 KB,0.0875523%) IndexBlock(36,467.61 KB,0.150214%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 12 17:34:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:34:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:20.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:21 np0005481680 nova_compute[264665]: 2025-10-12 21:34:21.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:22] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 12 17:34:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:22] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
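Prometheus scrapes the mgr prometheus module every ten seconds, and each scrape is logged twice: once on the container's stdout and once by cherrypy inside ceph-mgr. The endpoint can be fetched by hand; port 9283 is the module's default and an assumption here, since the log records only the path:

    from urllib.request import urlopen

    # Manual scrape of the mgr prometheus module; the port is assumed,
    # the host and path are taken from the access lines above.
    with urlopen('http://192.168.122.100:9283/metrics', timeout=5) as resp:
        body = resp.read().decode()
    print(len(body), 'bytes;', body.splitlines()[0])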
Oct 12 17:34:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:22 np0005481680 nova_compute[264665]: 2025-10-12 21:34:22.800 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760304847.7998674, d4877f49-ddd8-47a2-9a2f-6c2e26c9f401 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 12 17:34:22 np0005481680 nova_compute[264665]: 2025-10-12 21:34:22.801 2 INFO nova.compute.manager [-] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] VM Stopped (Lifecycle Event)
Oct 12 17:34:22 np0005481680 nova_compute[264665]: 2025-10-12 21:34:22.829 2 DEBUG nova.compute.manager [None req-e493dd27-8721-4576-a744-64c300601953 - - - - - -] [instance: d4877f49-ddd8-47a2-9a2f-6c2e26c9f401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 12 17:34:22 np0005481680 nova_compute[264665]: 2025-10-12 21:34:22.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:23 np0005481680 podman[287296]: 2025-10-12 21:34:23.117151338 +0000 UTC m=+0.076741723 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:34:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:24.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:26 np0005481680 nova_compute[264665]: 2025-10-12 21:34:26.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:27.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
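Alertmanager is trying to POST alert batches to the dashboard receiver on compute-1 and compute-2 port 8443 and giving up after two attempts. A throwaway stand-in for that endpoint (same path and port as the URL in the error; a standard Alertmanager webhook JSON body is assumed) is enough to check whether the dispatcher stops retrying once something answers:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Minimal stand-in for the dashboard receiver the dispatcher cannot
    # reach; it just acknowledges the POSTed alert batch with a 200.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_error(404)
                return
            size = int(self.headers.get('Content-Length', 0))
            batch = json.loads(self.rfile.read(size) or b'{}')
            print('got', len(batch.get('alerts', [])), 'alert(s),',
                  'status', batch.get('status'))
            self.send_response(200)
            self.end_headers()

    HTTPServer(('', 8443), Receiver).serve_forever()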
Oct 12 17:34:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:27 np0005481680 nova_compute[264665]: 2025-10-12 21:34:27.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:28.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:28.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:30.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:34:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:34:31 np0005481680 nova_compute[264665]: 2025-10-12 21:34:31.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:32] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 12 17:34:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:32] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Oct 12 17:34:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:32.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:32 np0005481680 nova_compute[264665]: 2025-10-12 21:34:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:33.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:34:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
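The handle_command/audit pairs here are the cephadm mgr module driving the monitor with JSON mon commands. The same call can be made from any client via librados; a minimal sketch, with the conffile path and the use of the admin keyring as assumptions:

    import json
    import rados  # python3-rados bindings

    # Send the same JSON mon command the mgr dispatches above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path assumed
    cluster.connect()
    cmd = json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    print(ret, errs, json.loads(outbuf or b'[]'))
    cluster.shutdown()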
Oct 12 17:34:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:34:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:34:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:34:36 np0005481680 podman[287461]: 2025-10-12 21:34:36.001346601 +0000 UTC m=+0.078395215 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
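The health_status=healthy events come from podman's healthcheck timer running the 'test' command declared in config_data ('/openstack/healthcheck', mounted into the container). The same check can be triggered on demand:

    import subprocess

    # Run the container's configured healthcheck immediately; the
    # timer-driven equivalent produced the event logged above.
    subprocess.run(['podman', 'healthcheck', 'run', 'iscsid'], check=True)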
Oct 12 17:34:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:34:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:34:36 np0005481680 podman[287462]: 2025-10-12 21:34:36.052387788 +0000 UTC m=+0.125622055 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 12 17:34:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.561467417 +0000 UTC m=+0.072781082 container create 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:34:36 np0005481680 systemd[1]: Started libpod-conmon-9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2.scope.
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.531958516 +0000 UTC m=+0.043272261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:36 np0005481680 nova_compute[264665]: 2025-10-12 21:34:36.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:36 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.665609095 +0000 UTC m=+0.176922770 container init 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.679450507 +0000 UTC m=+0.190764202 container start 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.683302845 +0000 UTC m=+0.194616500 container attach 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:34:36 np0005481680 festive_bhabha[287588]: 167 167
Oct 12 17:34:36 np0005481680 systemd[1]: libpod-9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2.scope: Deactivated successfully.
Oct 12 17:34:36 np0005481680 conmon[287588]: conmon 9061c2698df33d9d3e2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2.scope/container/memory.events
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.687992614 +0000 UTC m=+0.199306309 container died 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:34:36 np0005481680 systemd[1]: var-lib-containers-storage-overlay-afe1d70e84a405928806a772b8d357663694e2603bfc412bab0ae2731acccb9a-merged.mount: Deactivated successfully.
Oct 12 17:34:36 np0005481680 podman[287572]: 2025-10-12 21:34:36.74561628 +0000 UTC m=+0.256929935 container remove 9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:34:36 np0005481680 systemd[1]: libpod-conmon-9061c2698df33d9d3e2b790e6c766e61e00f3f2448c89b43867cdacfcb8f79b2.scope: Deactivated successfully.
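festive_bhabha is a short-lived cephadm helper: created, started, it prints '167 167' (the ceph UID and GID) and is torn down within a second, which is apparently why conmon fails to open a cgroups file that is already gone. A one-shot run of the same shape; the stat invocation is an assumption about what the helper actually executed, chosen because it reproduces the '167 167' output:

    import subprocess

    # One-shot helper run; --rm makes podman log the same
    # create/init/start/attach/died/remove sequence seen above.
    image = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')
    subprocess.run(['podman', 'run', '--rm', image,
                    'stat', '-c', '%u %g', '/var/lib/ceph'], check=True)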
Oct 12 17:34:36 np0005481680 podman[287614]: 2025-10-12 21:34:36.986089296 +0000 UTC m=+0.068220796 container create 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 12 17:34:37 np0005481680 systemd[1]: Started libpod-conmon-9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6.scope.
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:36.958296569 +0000 UTC m=+0.040428129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:37.092571934 +0000 UTC m=+0.174703474 container init 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:37.100668841 +0000 UTC m=+0.182800341 container start 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:37.103878922 +0000 UTC m=+0.186010462 container attach 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:34:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:37.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:37 np0005481680 dreamy_mccarthy[287630]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:34:37 np0005481680 dreamy_mccarthy[287630]: --> All data devices are unavailable
Oct 12 17:34:37 np0005481680 systemd[1]: libpod-9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6.scope: Deactivated successfully.
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:37.551891857 +0000 UTC m=+0.634023387 container died 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:34:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:34:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3b93edccdd8d7086119be61543613f8d4b6c1357b2ef1391c4d6fdde6231d123-merged.mount: Deactivated successfully.
Oct 12 17:34:37 np0005481680 podman[287614]: 2025-10-12 21:34:37.619109336 +0000 UTC m=+0.701240866 container remove 9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:34:37 np0005481680 systemd[1]: libpod-conmon-9e2cae8c64fdaa422362b0ba75ae62c45bfbad1679db8d144875c1149ee0f3f6.scope: Deactivated successfully.
Oct 12 17:34:37 np0005481680 nova_compute[264665]: 2025-10-12 21:34:37.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.453377305 +0000 UTC m=+0.067029186 container create 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:34:38 np0005481680 systemd[1]: Started libpod-conmon-7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa.scope.
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.429283052 +0000 UTC m=+0.042934923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.543635771 +0000 UTC m=+0.157287672 container init 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.555927723 +0000 UTC m=+0.169579624 container start 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.560557001 +0000 UTC m=+0.174208892 container attach 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:34:38 np0005481680 romantic_sinoussi[287766]: 167 167
Oct 12 17:34:38 np0005481680 systemd[1]: libpod-7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa.scope: Deactivated successfully.
Oct 12 17:34:38 np0005481680 conmon[287766]: conmon 7025eb3d8dc28c8e66b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa.scope/container/memory.events
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.56602297 +0000 UTC m=+0.179674861 container died 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:34:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-230b25954a256f3b13b836b94f85543326bed47b5fdba3f2ea032655766f9d98-merged.mount: Deactivated successfully.
Oct 12 17:34:38 np0005481680 podman[287751]: 2025-10-12 21:34:38.617671963 +0000 UTC m=+0.231323854 container remove 7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:34:38 np0005481680 systemd[1]: libpod-conmon-7025eb3d8dc28c8e66b8970a7a660f364b1d8c12c50f9f6d24897d7641b867aa.scope: Deactivated successfully.
Oct 12 17:34:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:38.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:38.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:34:38 np0005481680 podman[287791]: 2025-10-12 21:34:38.891664282 +0000 UTC m=+0.074860634 container create a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:34:38 np0005481680 systemd[1]: Started libpod-conmon-a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63.scope.
Oct 12 17:34:38 np0005481680 podman[287791]: 2025-10-12 21:34:38.859658148 +0000 UTC m=+0.042854560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:38 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce9ed3509dd0435a2038da83c7ab925a25febd74f290847467223124db2d6c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce9ed3509dd0435a2038da83c7ab925a25febd74f290847467223124db2d6c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce9ed3509dd0435a2038da83c7ab925a25febd74f290847467223124db2d6c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:38 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce9ed3509dd0435a2038da83c7ab925a25febd74f290847467223124db2d6c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:39 np0005481680 podman[287791]: 2025-10-12 21:34:39.016175549 +0000 UTC m=+0.199371931 container init a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 12 17:34:39 np0005481680 podman[287791]: 2025-10-12 21:34:39.028151154 +0000 UTC m=+0.211347516 container start a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:34:39 np0005481680 podman[287791]: 2025-10-12 21:34:39.032620008 +0000 UTC m=+0.215816410 container attach a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:34:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]: {
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:    "0": [
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:        {
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "devices": [
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "/dev/loop3"
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            ],
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "lv_name": "ceph_lv0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "lv_size": "21470642176",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "name": "ceph_lv0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "tags": {
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.cluster_name": "ceph",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.crush_device_class": "",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.encrypted": "0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.osd_id": "0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.type": "block",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.vdo": "0",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:                "ceph.with_tpm": "0"
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            },
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "type": "block",
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:            "vg_name": "ceph_vg0"
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:        }
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]:    ]
Oct 12 17:34:39 np0005481680 crazy_mcclintock[287809]: }
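crazy_mcclintock prints what looks like 'ceph-volume lvm list --format json': a map from OSD id to the LVs backing it, with the ceph.* LV tags given both flattened and expanded. Pulling the useful fields out of it; the report below is trimmed to the keys the loop touches:

    import json

    # Trimmed copy of the report printed above.
    raw = '''
    {"0": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "type": "block",
            "tags": {"ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"}}]}
    '''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            if lv['type'] == 'block':
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"(fsid {lv['tags']['ceph.osd_fsid']})")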
Oct 12 17:34:39 np0005481680 systemd[1]: libpod-a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63.scope: Deactivated successfully.
Oct 12 17:34:39 np0005481680 podman[287791]: 2025-10-12 21:34:39.436052758 +0000 UTC m=+0.619249150 container died a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:34:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6ce9ed3509dd0435a2038da83c7ab925a25febd74f290847467223124db2d6c2-merged.mount: Deactivated successfully.
Oct 12 17:34:39 np0005481680 podman[287791]: 2025-10-12 21:34:39.502732455 +0000 UTC m=+0.685928807 container remove a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 17:34:39 np0005481680 systemd[1]: libpod-conmon-a6188bb4dc4219a1dddcfc5d2fb84f9236b7fe6976c2c1d75fbf49e75f6d6c63.scope: Deactivated successfully.
Oct 12 17:34:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:40.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.362582064 +0000 UTC m=+0.070165176 container create 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.33415168 +0000 UTC m=+0.041734792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:40 np0005481680 systemd[1]: Started libpod-conmon-5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb.scope.
Oct 12 17:34:40 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.524969864 +0000 UTC m=+0.232553026 container init 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.535614054 +0000 UTC m=+0.243197156 container start 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.540549 +0000 UTC m=+0.248132152 container attach 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 12 17:34:40 np0005481680 sweet_diffie[287941]: 167 167
Oct 12 17:34:40 np0005481680 systemd[1]: libpod-5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb.scope: Deactivated successfully.
Oct 12 17:34:40 np0005481680 podman[287925]: 2025-10-12 21:34:40.544361857 +0000 UTC m=+0.251944949 container died 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.672381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304880672452, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1142, "num_deletes": 251, "total_data_size": 1959444, "memory_usage": 1981152, "flush_reason": "Manual Compaction"}
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 12 17:34:40 np0005481680 nova_compute[264665]: 2025-10-12 21:34:40.673 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:40 np0005481680 nova_compute[264665]: 2025-10-12 21:34:40.676 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304880734635, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1906816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31549, "largest_seqno": 32690, "table_properties": {"data_size": 1901385, "index_size": 2827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11001, "raw_average_key_size": 18, "raw_value_size": 1890459, "raw_average_value_size": 3150, "num_data_blocks": 124, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304781, "oldest_key_time": 1760304781, "file_creation_time": 1760304880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 62306 microseconds, and 8262 cpu microseconds.
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:34:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-63bceefa794bb91479b1a0d28fa4f004d37af3fd2474368f46c3b2f985140824-merged.mount: Deactivated successfully.
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.734696) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1906816 bytes OK
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.734720) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.775523) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.775572) EVENT_LOG_v1 {"time_micros": 1760304880775560, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.775600) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1954295, prev total WAL file size 1954295, number of live WAL files 2.
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.777240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1862KB)], [68(13MB)]
Oct 12 17:34:40 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304880777293, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 15768016, "oldest_snapshot_seqno": -1}
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6261 keys, 14538464 bytes, temperature: kUnknown
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304881109400, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 14538464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14496752, "index_size": 24962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 161762, "raw_average_key_size": 25, "raw_value_size": 14384008, "raw_average_value_size": 2297, "num_data_blocks": 991, "num_entries": 6261, "num_filter_entries": 6261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:34:41 np0005481680 podman[287925]: 2025-10-12 21:34:41.210547831 +0000 UTC m=+0.918130923 container remove 5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.109971) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 14538464 bytes
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.211164) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.5 rd, 43.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 13.2 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(15.9) write-amplify(7.6) OK, records in: 6777, records dropped: 516 output_compression: NoCompression
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.211214) EVENT_LOG_v1 {"time_micros": 1760304881211194, "job": 38, "event": "compaction_finished", "compaction_time_micros": 332210, "compaction_time_cpu_micros": 54864, "output_level": 6, "num_output_files": 1, "total_output_size": 14538464, "num_input_records": 6777, "num_output_records": 6261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304881212338, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304881216649, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:40.777183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.216699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.216705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.216708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.216710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:34:41 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:34:41.216713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
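The JOB 38 summary above reports write-amplify(7.6) and read-write-amplify(15.9), and both follow directly from the byte counts in the same lines: 1,906,816 bytes read from L0 (table #70), 15,768,016 bytes total input, 14,538,464 bytes written to L6 (table #71), over 332,210 µs. A quick arithmetic check, with constants copied from the log (the MB/s figures assume RocksDB's decimal megabytes, which matches the logged 47.5/43.8):

    l0_in = 1_906_816             # bytes read from L0 (table #70)
    l6_in = 15_768_016 - l0_in    # bytes read from L6 (table #68), ~13.2 MB
    out   = 14_538_464            # bytes written to L6 (table #71)
    secs  = 332_210 / 1e6         # compaction_time_micros

    write_amp = out / l0_in                    # ~7.62  -> logged as 7.6
    rw_amp    = (l0_in + l6_in + out) / l0_in  # ~15.89 -> logged as 15.9
    rd_rate   = (l0_in + l6_in) / secs / 1e6   # ~47.5 MB/s rd
    wr_rate   = out / secs / 1e6               # ~43.8 MB/s wr
    print(write_amp, rw_amp, rd_rate, wr_rate)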
Oct 12 17:34:41 np0005481680 systemd[1]: libpod-conmon-5475d48bae473ae1a3b104595edcc2408843843b91a57e0ae10a46dae469c8bb.scope: Deactivated successfully.
Oct 12 17:34:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:34:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:41.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:34:41 np0005481680 podman[287965]: 2025-10-12 21:34:41.479854062 +0000 UTC m=+0.094613989 container create dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:34:41 np0005481680 podman[287965]: 2025-10-12 21:34:41.427852709 +0000 UTC m=+0.042612666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:34:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Oct 12 17:34:41 np0005481680 systemd[1]: Started libpod-conmon-dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b.scope.
Oct 12 17:34:41 np0005481680 nova_compute[264665]: 2025-10-12 21:34:41.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:41 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:34:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68febb20abce27dc57cd9963032208bdd5e2d0bb27f4aae1b09af6f8542a49bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68febb20abce27dc57cd9963032208bdd5e2d0bb27f4aae1b09af6f8542a49bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68febb20abce27dc57cd9963032208bdd5e2d0bb27f4aae1b09af6f8542a49bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68febb20abce27dc57cd9963032208bdd5e2d0bb27f4aae1b09af6f8542a49bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:34:41 np0005481680 podman[287965]: 2025-10-12 21:34:41.699696612 +0000 UTC m=+0.314456599 container init dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:34:41 np0005481680 podman[287965]: 2025-10-12 21:34:41.71573339 +0000 UTC m=+0.330493317 container start dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:34:41 np0005481680 podman[287965]: 2025-10-12 21:34:41.719863075 +0000 UTC m=+0.334623002 container attach dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:34:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:42] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 12 17:34:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:42] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 12 17:34:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:42.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
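The recurring anonymous "HEAD /" probes hitting radosgw every second follow a fixed beast access-log layout: request pointer, client IP, user, bracketed timestamp, quoted request line, status, byte count, three unused "-" fields, and a latency suffix. A regex sketch inferred from these lines (the group names are my own, not radosgw's):

    import re

    BEAST = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:34:42.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m.group('ip'), m.group('method'),
              m.group('status'), m.group('latency'))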
Oct 12 17:34:42 np0005481680 lvm[288056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:34:42 np0005481680 lvm[288056]: VG ceph_vg0 finished
Oct 12 17:34:42 np0005481680 laughing_merkle[287981]: {}
Oct 12 17:34:42 np0005481680 systemd[1]: libpod-dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b.scope: Deactivated successfully.
Oct 12 17:34:42 np0005481680 podman[287965]: 2025-10-12 21:34:42.673903069 +0000 UTC m=+1.288663006 container died dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:34:42 np0005481680 systemd[1]: libpod-dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b.scope: Consumed 1.579s CPU time.
Oct 12 17:34:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-68febb20abce27dc57cd9963032208bdd5e2d0bb27f4aae1b09af6f8542a49bf-merged.mount: Deactivated successfully.
Oct 12 17:34:42 np0005481680 nova_compute[264665]: 2025-10-12 21:34:42.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:43 np0005481680 podman[287965]: 2025-10-12 21:34:43.075534265 +0000 UTC m=+1.690294202 container remove dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_merkle, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:34:43 np0005481680 systemd[1]: libpod-conmon-dcb9a47c9bc0fa94142f1449943f25e4885a760a79d32bc9c56036d8ccf7801b.scope: Deactivated successfully.
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:34:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:43.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.9 MiB/s wr, 28 op/s
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.685 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.686 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.686 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.686 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:34:43 np0005481680 nova_compute[264665]: 2025-10-12 21:34:43.687 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:34:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:34:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951433844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.202 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
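The resource tracker shells out to exactly the command logged above to size Ceph-backed storage. A standalone sketch of the same call — it only runs on a host with a reachable cluster and the client.openstack keyring, and the "stats" keys are assumed from ceph df's documented JSON output rather than shown in this log:

    import json
    import subprocess

    # Same invocation nova_compute logs via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True).stdout
    df = json.loads(out)

    # Assumed keys: ceph df's JSON carries cluster totals under "stats".
    stats = df["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30,
          "avail GiB:", stats["total_avail_bytes"] / 2**30)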
Oct 12 17:34:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:44.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.485 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.488 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4515MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.488 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.489 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.571 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.572 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.637 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing inventories for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.655 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating ProviderTree inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.655 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.674 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing aggregate associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.700 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing trait associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 12 17:34:44 np0005481680 nova_compute[264665]: 2025-10-12 21:34:44.732 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:34:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:34:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133139446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:34:45 np0005481680 nova_compute[264665]: 2025-10-12 21:34:45.224 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:34:45 np0005481680 nova_compute[264665]: 2025-10-12 21:34:45.232 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:34:45 np0005481680 nova_compute[264665]: 2025-10-12 21:34:45.250 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:34:45 np0005481680 nova_compute[264665]: 2025-10-12 21:34:45.271 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:34:45 np0005481680 nova_compute[264665]: 2025-10-12 21:34:45.271 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
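The inventory dict the report client logs above feeds placement's usual capacity rule, schedulable = (total - reserved) * allocation_ratio. Worked numbers from the values in those lines (a sketch of the rule, not nova's code):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: schedulable capacity = {cap}")
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

which is consistent with the "Final resource view" above: 8 physical vCPUs stretched 4x, 512 MB of RAM held back, and the 59 GB disk derated to 90%.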
Oct 12 17:34:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 873 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct 12 17:34:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:46 np0005481680 nova_compute[264665]: 2025-10-12 21:34:46.272 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:46 np0005481680 nova_compute[264665]: 2025-10-12 21:34:46.272 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:46 np0005481680 nova_compute[264665]: 2025-10-12 21:34:46.273 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:34:46 np0005481680 nova_compute[264665]: 2025-10-12 21:34:46.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:47 np0005481680 podman[288172]: 2025-10-12 21:34:47.141222881 +0000 UTC m=+0.093538561 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 12 17:34:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:47.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 832 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 12 17:34:47 np0005481680 nova_compute[264665]: 2025-10-12 21:34:47.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:34:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:48.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:34:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:34:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:34:48 np0005481680 nova_compute[264665]: 2025-10-12 21:34:48.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:48.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:49.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 12 17:34:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:50 np0005481680 nova_compute[264665]: 2025-10-12 21:34:50.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:34:50 np0005481680 nova_compute[264665]: 2025-10-12 21:34:50.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:34:50 np0005481680 nova_compute[264665]: 2025-10-12 21:34:50.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:34:50 np0005481680 nova_compute[264665]: 2025-10-12 21:34:50.688 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:34:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:51.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:34:51 np0005481680 nova_compute[264665]: 2025-10-12 21:34:51.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:52] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:34:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:34:52] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:34:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:52 np0005481680 nova_compute[264665]: 2025-10-12 21:34:52.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:53.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 12 17:34:53 np0005481680 nova_compute[264665]: 2025-10-12 21:34:53.683 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:34:54 np0005481680 podman[288199]: 2025-10-12 21:34:54.123350593 +0000 UTC m=+0.084133731 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 12 17:34:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:34:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:55 np0005481680 ovn_controller[154617]: 2025-10-12T21:34:55Z|00097|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 12 17:34:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 93 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 969 KiB/s wr, 89 op/s
Oct 12 17:34:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:56 np0005481680 nova_compute[264665]: 2025-10-12 21:34:56.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:57.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 93 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 956 KiB/s wr, 52 op/s
Oct 12 17:34:57 np0005481680 nova_compute[264665]: 2025-10-12 21:34:57.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:34:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:34:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:34:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:34:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:34:58.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:34:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:34:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:34:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:34:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:34:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 12 17:35:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:01.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 12 17:35:01 np0005481680 nova_compute[264665]: 2025-10-12 21:35:01.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:02] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:35:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:02] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Oct 12 17:35:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:02 np0005481680 nova_compute[264665]: 2025-10-12 21:35:02.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:03 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-crash-compute-0[79043]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
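This ERROR also recurs on a timer: the crash collector cannot even list /var/lib/ceph/crash, so any crash dumps from this host would never reach `ceph crash ls`. The usual cause is directory ownership that does not match the container's ceph user. A quick check along these lines narrows it down; the path comes from the log, while uid/gid 167 (the "ceph" account in RHEL-family Ceph packages) is an assumption about this deployment.

import os
import stat

# Path taken from the ceph-crash error above; uid/gid 167 is assumed to be
# the in-container "ceph" user, as in RHEL-family Ceph packages.
p = "/var/lib/ceph/crash"
st = os.stat(p)
print("owner %d:%d mode %s" % (st.st_uid, st.st_gid,
                               oct(stat.S_IMODE(st.st_mode))))
if st.st_uid != 167 or not st.st_mode & stat.S_IXUSR:
    print("ceph-crash likely cannot traverse %s; "
          "consider chown -R 167:167 %s" % (p, p))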
Oct 12 17:35:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:35:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:35:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:03.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 12 17:35:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 12 17:35:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:06 np0005481680 nova_compute[264665]: 2025-10-12 21:35:06.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:07 np0005481680 podman[288256]: 2025-10-12 21:35:07.176581676 +0000 UTC m=+0.129288610 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible)
Oct 12 17:35:07 np0005481680 podman[288257]: 2025-10-12 21:35:07.195054586 +0000 UTC m=+0.143514112 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 12 17:35:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:07.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:07.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Oct 12 17:35:07 np0005481680 nova_compute[264665]: 2025-10-12 21:35:07.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:08.168 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 12 17:35:08 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:08.170 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 12 17:35:08 np0005481680 nova_compute[264665]: 2025-10-12 21:35:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:08.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Oct 12 17:35:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:10.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:10 np0005481680 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 12 17:35:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 12 17:35:11 np0005481680 nova_compute[264665]: 2025-10-12 21:35:11.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:12] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Oct 12 17:35:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:12] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Oct 12 17:35:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:12.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:12 np0005481680 nova_compute[264665]: 2025-10-12 21:35:12.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:13 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:13.173 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
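Here the metadata agent completes the handshake it announced at 21:35:08: after the 5-second delay, it writes the nb_cfg value it observed (14) into its own Chassis_Private row so the control plane can see the chassis is caught up. A standalone sketch of an equivalent transaction via ovsdbapp follows; the southbound endpoint is an assumption, while the record UUID and the external_ids key/value come from the log line above.

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_southbound import impl_idl

# Sketch only: "tcp:127.0.0.1:6642" is an assumed SB endpoint; the record
# UUID and external_ids value mirror the DbSetCommand logged above.
idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6642", "OVN_Southbound")
sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
sb.db_set("Chassis_Private", "4fd585ac-c8a3-45e9-b563-f151bc390e2e",
          ("external_ids", {"neutron:ovn-metadata-sb-cfg": "14"})).execute(
              check_error=True)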
Oct 12 17:35:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:35:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:35:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 12 17:35:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:14.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:15.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct 12 17:35:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:16.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:16 np0005481680 nova_compute[264665]: 2025-10-12 21:35:16.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:17.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:17.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Oct 12 17:35:17 np0005481680 nova_compute[264665]: 2025-10-12 21:35:17.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:18 np0005481680 podman[288315]: 2025-10-12 21:35:18.121472465 +0000 UTC m=+0.084217034 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:35:18
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', '.nfs', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.meta']
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:35:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:35:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:35:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:18.371 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:35:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:18.372 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:35:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:35:18.372 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:18 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
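Every "pg target" figure in the autoscaler block above equals the pool's used-space fraction times its bias times 300, before quantization to the pool's current PG count. The factor 300 does not appear in this excerpt; a plausible reading is 3 OSDs times the default mon_target_pg_per_osd of 100, but that is an inference. The arithmetic itself reproduces every logged pool:

# Recomputing the autoscaler's "pg target" from the lines above:
#   pg_target = used_fraction * bias * 300
# The constant 300 is inferred from the data; its decomposition into
# 3 OSDs x mon_target_pg_per_osd=100 is an assumption.
samples = [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("vms",                6.359070782053786e-08, 1.0, 1.907721234616136e-05),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
for pool, used, bias, target in samples:
    assert abs(used * bias * 300 - target) < 1e-12, pool
print("pg_target == used_fraction * bias * 300 for all sampled pools")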
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:35:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Oct 12 17:35:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:21.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:35:21 np0005481680 nova_compute[264665]: 2025-10-12 21:35:21.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:22] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:22] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:22.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:23 np0005481680 nova_compute[264665]: 2025-10-12 21:35:22.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:23.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:35:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:24.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:25 np0005481680 podman[288343]: 2025-10-12 21:35:25.131968557 +0000 UTC m=+0.080742855 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 12 17:35:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 12 17:35:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:26 np0005481680 nova_compute[264665]: 2025-10-12 21:35:26.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:27.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:35:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:27.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:27.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:28 np0005481680 nova_compute[264665]: 2025-10-12 21:35:28.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:31.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:31 np0005481680 nova_compute[264665]: 2025-10-12 21:35:31.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:32] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:32] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:33 np0005481680 nova_compute[264665]: 2025-10-12 21:35:33.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:35:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:35:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:33.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:34.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:35.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:36.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:36 np0005481680 nova_compute[264665]: 2025-10-12 21:35:36.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:37.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:38 np0005481680 nova_compute[264665]: 2025-10-12 21:35:38.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:38 np0005481680 podman[288402]: 2025-10-12 21:35:38.149244159 +0000 UTC m=+0.098927578 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:35:38 np0005481680 podman[288403]: 2025-10-12 21:35:38.218724805 +0000 UTC m=+0.161355204 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 12 17:35:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:38.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:39.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:41.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:41 np0005481680 nova_compute[264665]: 2025-10-12 21:35:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:42] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:42] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:42 np0005481680 nova_compute[264665]: 2025-10-12 21:35:42.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:42 np0005481680 nova_compute[264665]: 2025-10-12 21:35:42.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:43 np0005481680 nova_compute[264665]: 2025-10-12 21:35:43.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:43.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:43 np0005481680 nova_compute[264665]: 2025-10-12 21:35:43.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:44.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:35:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:35:44 np0005481680 nova_compute[264665]: 2025-10-12 21:35:44.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:44 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:35:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.371321653 +0000 UTC m=+0.095820498 container create 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.313677638 +0000 UTC m=+0.038176543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:35:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:35:45 np0005481680 systemd[1]: Started libpod-conmon-09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449.scope.
Oct 12 17:35:45 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.521719258 +0000 UTC m=+0.246218173 container init 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.531322872 +0000 UTC m=+0.255821687 container start 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:35:45 np0005481680 nice_khayyam[288646]: 167 167
Oct 12 17:35:45 np0005481680 systemd[1]: libpod-09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449.scope: Deactivated successfully.
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.549857574 +0000 UTC m=+0.274356479 container attach 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.550733776 +0000 UTC m=+0.275232591 container died 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 12 17:35:45 np0005481680 systemd[1]: var-lib-containers-storage-overlay-28e1b8307d196d6b888a85f2d2df378f79de4cbbdcb275718a2f0eba81630e9f-merged.mount: Deactivated successfully.
Oct 12 17:35:45 np0005481680 podman[288630]: 2025-10-12 21:35:45.651654043 +0000 UTC m=+0.376152858 container remove 09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:45 np0005481680 systemd[1]: libpod-conmon-09bd1a4039ff6320090d620240a4675a96b0a01f54b31dff42b8395bc0a0b449.scope: Deactivated successfully.
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.703 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.703 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.703 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.704 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:35:45 np0005481680 nova_compute[264665]: 2025-10-12 21:35:45.704 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:35:45 np0005481680 podman[288675]: 2025-10-12 21:35:45.945111807 +0000 UTC m=+0.100484167 container create 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:35:45 np0005481680 podman[288675]: 2025-10-12 21:35:45.871947016 +0000 UTC m=+0.027319396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:46 np0005481680 systemd[1]: Started libpod-conmon-697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6.scope.
Oct 12 17:35:46 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:46 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:35:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042989227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:35:46 np0005481680 podman[288675]: 2025-10-12 21:35:46.168676473 +0000 UTC m=+0.324048873 container init 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.168 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:35:46 np0005481680 podman[288675]: 2025-10-12 21:35:46.177531188 +0000 UTC m=+0.332903548 container start 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:35:46 np0005481680 podman[288675]: 2025-10-12 21:35:46.211246306 +0000 UTC m=+0.366618696 container attach 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:35:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:35:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.413 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.416 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.416 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.417 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.489 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.490 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:35:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.515 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:35:46 np0005481680 sweet_mendeleev[288735]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:35:46 np0005481680 sweet_mendeleev[288735]: --> All data devices are unavailable
Oct 12 17:35:46 np0005481680 systemd[1]: libpod-697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6.scope: Deactivated successfully.
Oct 12 17:35:46 np0005481680 podman[288675]: 2025-10-12 21:35:46.619820468 +0000 UTC m=+0.775192868 container died 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 17:35:46 np0005481680 nova_compute[264665]: 2025-10-12 21:35:46.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:46 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5882a89ef59e9b425e182b10184dcc688a2bcc799187d8429ea850a48a56e787-merged.mount: Deactivated successfully.
Oct 12 17:35:46 np0005481680 podman[288675]: 2025-10-12 21:35:46.923265415 +0000 UTC m=+1.078637805 container remove 697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:35:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:35:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687035469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:35:47 np0005481680 systemd[1]: libpod-conmon-697f9336d622bf14859a6e648576ce705ad48ccdea838d45974c881557e9c2a6.scope: Deactivated successfully.
Oct 12 17:35:47 np0005481680 nova_compute[264665]: 2025-10-12 21:35:47.013 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:35:47 np0005481680 nova_compute[264665]: 2025-10-12 21:35:47.021 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:35:47 np0005481680 nova_compute[264665]: 2025-10-12 21:35:47.036 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:35:47 np0005481680 nova_compute[264665]: 2025-10-12 21:35:47.038 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:35:47 np0005481680 nova_compute[264665]: 2025-10-12 21:35:47.038 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:35:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:47.277Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:35:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:47.277Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:47.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.69290535 +0000 UTC m=+0.067511128 container create ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:35:47 np0005481680 systemd[1]: Started libpod-conmon-ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd.scope.
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.664805205 +0000 UTC m=+0.039411033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:47 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.807829373 +0000 UTC m=+0.182435211 container init ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.82069242 +0000 UTC m=+0.195298168 container start ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.825580785 +0000 UTC m=+0.200186633 container attach ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:35:47 np0005481680 peaceful_stonebraker[288899]: 167 167
Oct 12 17:35:47 np0005481680 systemd[1]: libpod-ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd.scope: Deactivated successfully.
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.829305039 +0000 UTC m=+0.203910807 container died ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:35:47 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6c61a50185fe2ed27eea353fd34226398076b24f5dd82aa90bff974dea69c5d6-merged.mount: Deactivated successfully.
Oct 12 17:35:47 np0005481680 podman[288881]: 2025-10-12 21:35:47.877211188 +0000 UTC m=+0.251816936 container remove ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 12 17:35:47 np0005481680 systemd[1]: libpod-conmon-ec0128d1afcb9ef83396843bd714a2f5b1d7cef9049a4bfa138af3d0edc51efd.scope: Deactivated successfully.
Oct 12 17:35:48 np0005481680 nova_compute[264665]: 2025-10-12 21:35:48.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:48 np0005481680 nova_compute[264665]: 2025-10-12 21:35:48.038 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:48 np0005481680 nova_compute[264665]: 2025-10-12 21:35:48.039 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:48 np0005481680 nova_compute[264665]: 2025-10-12 21:35:48.039 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.150703333 +0000 UTC m=+0.078898227 container create 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:35:48 np0005481680 systemd[1]: Started libpod-conmon-1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b.scope.
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.11438642 +0000 UTC m=+0.042581364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:48 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e81ecbd61f73ad634e99b92b76ea144f60f1430e379463f0e2fd908067cbc87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e81ecbd61f73ad634e99b92b76ea144f60f1430e379463f0e2fd908067cbc87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e81ecbd61f73ad634e99b92b76ea144f60f1430e379463f0e2fd908067cbc87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:48 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e81ecbd61f73ad634e99b92b76ea144f60f1430e379463f0e2fd908067cbc87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.265452832 +0000 UTC m=+0.193647716 container init 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.27793704 +0000 UTC m=+0.206131934 container start 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.282374112 +0000 UTC m=+0.210569016 container attach 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:35:48 np0005481680 podman[288938]: 2025-10-12 21:35:48.339492365 +0000 UTC m=+0.127616596 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:35:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:48.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:35:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:48 np0005481680 elated_newton[288941]: {
Oct 12 17:35:48 np0005481680 elated_newton[288941]:    "0": [
Oct 12 17:35:48 np0005481680 elated_newton[288941]:        {
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "devices": [
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "/dev/loop3"
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            ],
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "lv_name": "ceph_lv0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "lv_size": "21470642176",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "name": "ceph_lv0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "tags": {
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.cluster_name": "ceph",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.crush_device_class": "",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.encrypted": "0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.osd_id": "0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.type": "block",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.vdo": "0",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:                "ceph.with_tpm": "0"
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            },
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "type": "block",
Oct 12 17:35:48 np0005481680 elated_newton[288941]:            "vg_name": "ceph_vg0"
Oct 12 17:35:48 np0005481680 elated_newton[288941]:        }
Oct 12 17:35:48 np0005481680 elated_newton[288941]:    ]
Oct 12 17:35:48 np0005481680 elated_newton[288941]: }
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352326264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:35:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352326264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.591812672 +0000 UTC m=+0.520007556 container died 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:35:48 np0005481680 systemd[1]: libpod-1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b.scope: Deactivated successfully.
Oct 12 17:35:48 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9e81ecbd61f73ad634e99b92b76ea144f60f1430e379463f0e2fd908067cbc87-merged.mount: Deactivated successfully.
Oct 12 17:35:48 np0005481680 podman[288924]: 2025-10-12 21:35:48.665954118 +0000 UTC m=+0.594148972 container remove 1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:35:48 np0005481680 systemd[1]: libpod-conmon-1c77252e2c0469a0e982e87f63d48995121bd5fbea05888ac36e497b04ecf41b.scope: Deactivated successfully.
Oct 12 17:35:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.436182078 +0000 UTC m=+0.066971475 container create 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:35:49 np0005481680 systemd[1]: Started libpod-conmon-837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135.scope.
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.410269048 +0000 UTC m=+0.041058475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.541923818 +0000 UTC m=+0.172713255 container init 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.556031407 +0000 UTC m=+0.186820844 container start 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.560452129 +0000 UTC m=+0.191241566 container attach 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:35:49 np0005481680 zealous_golick[289087]: 167 167
Oct 12 17:35:49 np0005481680 systemd[1]: libpod-837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135.scope: Deactivated successfully.
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.565371313 +0000 UTC m=+0.196160720 container died 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:35:49 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5af1dab8f2ac8e60b8ed9dd16177b9226996ba8b1f5abf27c1d4a04effb1f5ec-merged.mount: Deactivated successfully.
Oct 12 17:35:49 np0005481680 podman[289071]: 2025-10-12 21:35:49.6041328 +0000 UTC m=+0.234922207 container remove 837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:35:49 np0005481680 systemd[1]: libpod-conmon-837c9bafe6c973d57e8cd21ae794131b1b5bdd9511f95b3d2b7e2bec91a24135.scope: Deactivated successfully.
Oct 12 17:35:49 np0005481680 nova_compute[264665]: 2025-10-12 21:35:49.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:49 np0005481680 podman[289113]: 2025-10-12 21:35:49.840335657 +0000 UTC m=+0.055117793 container create b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:35:49 np0005481680 systemd[1]: Started libpod-conmon-b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc.scope.
Oct 12 17:35:49 np0005481680 podman[289113]: 2025-10-12 21:35:49.811205286 +0000 UTC m=+0.025987432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:35:49 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:35:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f2bb0ce171781328a731537513313db7dbcd10b3e51450644e773ecff5226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f2bb0ce171781328a731537513313db7dbcd10b3e51450644e773ecff5226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f2bb0ce171781328a731537513313db7dbcd10b3e51450644e773ecff5226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:49 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f2bb0ce171781328a731537513313db7dbcd10b3e51450644e773ecff5226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:35:49 np0005481680 podman[289113]: 2025-10-12 21:35:49.966878706 +0000 UTC m=+0.181660852 container init b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:35:49 np0005481680 podman[289113]: 2025-10-12 21:35:49.978676595 +0000 UTC m=+0.193458731 container start b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:35:49 np0005481680 podman[289113]: 2025-10-12 21:35:49.982828151 +0000 UTC m=+0.197610287 container attach b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:35:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:50 np0005481680 lvm[289204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:35:50 np0005481680 lvm[289204]: VG ceph_vg0 finished
Oct 12 17:35:50 np0005481680 youthful_poincare[289129]: {}
Oct 12 17:35:50 np0005481680 systemd[1]: libpod-b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc.scope: Deactivated successfully.
Oct 12 17:35:50 np0005481680 systemd[1]: libpod-b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc.scope: Consumed 1.526s CPU time.
Oct 12 17:35:50 np0005481680 podman[289113]: 2025-10-12 21:35:50.825841222 +0000 UTC m=+1.040623358 container died b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:35:50 np0005481680 systemd[1]: var-lib-containers-storage-overlay-3b0f2bb0ce171781328a731537513313db7dbcd10b3e51450644e773ecff5226-merged.mount: Deactivated successfully.
Oct 12 17:35:50 np0005481680 podman[289113]: 2025-10-12 21:35:50.885638273 +0000 UTC m=+1.100420409 container remove b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_poincare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:35:50 np0005481680 systemd[1]: libpod-conmon-b36f65214c706ec76fb8140255f474f2bf0dcf2e8e164af2f5a6fc14324bdafc.scope: Deactivated successfully.
Oct 12 17:35:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:35:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:35:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:35:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:51.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:35:51 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:51 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:35:51 np0005481680 nova_compute[264665]: 2025-10-12 21:35:51.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:52] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:35:52] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:35:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:52.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:52 np0005481680 nova_compute[264665]: 2025-10-12 21:35:52.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:35:52 np0005481680 nova_compute[264665]: 2025-10-12 21:35:52.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:35:52 np0005481680 nova_compute[264665]: 2025-10-12 21:35:52.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:35:52 np0005481680 nova_compute[264665]: 2025-10-12 21:35:52.691 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: ** DB Stats **
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Uptime(secs): 2400.1 total, 600.0 interval
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Cumulative WAL: 10K writes, 3002 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Interval writes: 2014 writes, 6494 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 7.68 MB, 0.01 MB/s
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Interval WAL: 2014 writes, 861 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
Oct 12 17:35:52 np0005481680 ceph-osd[81892]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 12 17:35:53 np0005481680 nova_compute[264665]: 2025-10-12 21:35:53.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:53.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:54.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:35:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:56 np0005481680 podman[289252]: 2025-10-12 21:35:56.114440752 +0000 UTC m=+0.077189514 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 12 17:35:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:56.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:56 np0005481680 nova_compute[264665]: 2025-10-12 21:35:56.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:57.280Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:57.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:35:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:57.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:58 np0005481680 nova_compute[264665]: 2025-10-12 21:35:58.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:35:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:35:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:35:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:35:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:35:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:35:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:35:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:35:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:35:59.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:00.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:01.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:01 np0005481680 nova_compute[264665]: 2025-10-12 21:36:01.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:02] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:36:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:02] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:36:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:03 np0005481680 nova_compute[264665]: 2025-10-12 21:36:03.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:36:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:36:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:03.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:04.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:05.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:06.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:06 np0005481680 nova_compute[264665]: 2025-10-12 21:36:06.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:07.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:36:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:07.282Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:07.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:07.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:08 np0005481680 nova_compute[264665]: 2025-10-12 21:36:08.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:36:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:08.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:36:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:08.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:09 np0005481680 podman[289309]: 2025-10-12 21:36:09.175420122 +0000 UTC m=+0.126484468 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 12 17:36:09 np0005481680 podman[289310]: 2025-10-12 21:36:09.240841796 +0000 UTC m=+0.184335850 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 12 17:36:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:09.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:11 np0005481680 nova_compute[264665]: 2025-10-12 21:36:11.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:36:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:36:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:13 np0005481680 nova_compute[264665]: 2025-10-12 21:36:13.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:13.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:13 np0005481680 systemd-logind[783]: New session 58 of user zuul.
Oct 12 17:36:13 np0005481680 systemd[1]: Started Session 58 of User zuul.
Oct 12 17:36:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:14.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:36:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:36:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:16.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:16 np0005481680 nova_compute[264665]: 2025-10-12 21:36:16.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:16 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16599 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16608 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25747 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:17.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25753 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16620 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:17 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25762 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:18 np0005481680 nova_compute[264665]: 2025-10-12 21:36:18.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 12 17:36:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1006033135' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:36:18
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta']
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:36:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:36:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:36:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:36:18.372 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:36:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:36:18.373 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:36:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:36:18.373 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:18.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:36:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:36:19 np0005481680 podman[289620]: 2025-10-12 21:36:19.172306661 +0000 UTC m=+0.126316354 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 12 17:36:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:36:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:36:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:21 np0005481680 nova_compute[264665]: 2025-10-12 21:36:21.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:22] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:36:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:22] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:36:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:23 np0005481680 nova_compute[264665]: 2025-10-12 21:36:23.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:24 np0005481680 ovs-vsctl[289775]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 12 17:36:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:25 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 12 17:36:25 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 12 17:36:25 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 12 17:36:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: cache status {prefix=cache status} (starting...)
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:26 np0005481680 podman[290100]: 2025-10-12 21:36:26.245991411 +0000 UTC m=+0.066137473 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: client ls {prefix=client ls} (starting...)
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:26 np0005481680 lvm[290161]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:36:26 np0005481680 lvm[290161]: VG ceph_vg0 finished
Oct 12 17:36:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:36:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:36:26 np0005481680 kernel: block dm-0: the capability attribute has been deprecated.
Oct 12 17:36:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:26.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26051 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:26 np0005481680 nova_compute[264665]: 2025-10-12 21:36:26.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25789 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: damage ls {prefix=damage ls} (starting...)
Oct 12 17:36:26 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26063 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16659 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump loads {prefix=dump loads} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272408689' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25810 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:27.285Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:36:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:27.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26084 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16686 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089066022' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25828 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16710 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 12 17:36:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241833249' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 12 17:36:27 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26111 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:28 np0005481680 nova_compute[264665]: 2025-10-12 21:36:28.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25852 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: ops {prefix=ops} (starting...)
Oct 12 17:36:28 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16731 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381203916' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16746 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:28.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787399488' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: session ls {prefix=session ls} (starting...)
Oct 12 17:36:28 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:36:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16764 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:36:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:36:28 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: status {prefix=status} (starting...)
Oct 12 17:36:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16779 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418604133' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 12 17:36:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913564777' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.636770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989636814, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1513, "num_deletes": 507, "total_data_size": 2170502, "memory_usage": 2216320, "flush_reason": "Manual Compaction"}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989646201, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2112081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32691, "largest_seqno": 34203, "table_properties": {"data_size": 2105558, "index_size": 3146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17538, "raw_average_key_size": 19, "raw_value_size": 2090124, "raw_average_value_size": 2281, "num_data_blocks": 138, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304881, "oldest_key_time": 1760304881, "file_creation_time": 1760304989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9460 microseconds, and 3893 cpu microseconds.
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.646235) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2112081 bytes OK
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.646250) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.648265) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.648279) EVENT_LOG_v1 {"time_micros": 1760304989648275, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.648294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2162824, prev total WAL file size 2162824, number of live WAL files 2.
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.649016) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2062KB)], [71(13MB)]
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989649051, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16650545, "oldest_snapshot_seqno": -1}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813209038' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6148 keys, 14391145 bytes, temperature: kUnknown
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989715115, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14391145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14349677, "index_size": 24974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 161512, "raw_average_key_size": 26, "raw_value_size": 14238353, "raw_average_value_size": 2315, "num_data_blocks": 985, "num_entries": 6148, "num_filter_entries": 6148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760304989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.715306) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14391145 bytes
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.716844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.9 rd, 217.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 7177, records dropped: 1029 output_compression: NoCompression
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.716862) EVENT_LOG_v1 {"time_micros": 1760304989716853, "job": 40, "event": "compaction_finished", "compaction_time_micros": 66096, "compaction_time_cpu_micros": 25023, "output_level": 6, "num_output_files": 1, "total_output_size": 14391145, "num_input_records": 7177, "num_output_records": 6148, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989717322, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760304989720091, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.648964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.720118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.720121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.720123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.720124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:36:29.720125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:36:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26162 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:29 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:36:29.780+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:29 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25885 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135517643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542366564' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25912 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16824 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:36:30.413+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:30 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395964433' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624095786' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 12 17:36:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644550536' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375801703' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489805363' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 12 17:36:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:31.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:31 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:31 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25960 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:31 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:36:31.726+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:31 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 12 17:36:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618817017' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 12 17:36:31 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:31 np0005481680 nova_compute[264665]: 2025-10-12 21:36:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:32] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:32] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26243 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16887 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 12 17:36:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3746346600' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 12 17:36:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16899 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.25993 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 12 17:36:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478930139' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 827392 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 819200 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 819200 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 819200 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 811008 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
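The _resize_shards line shows how the autotuner's budget (cache_size 2845415832, the same value the tune_memory lines keep settling on as "new mem") is carved into kv, kv_onode, meta and data shards; the four *_alloc figures account for almost the whole budget. Summing them:

    cache_size = 2845415832
    allocs = dict(kv=1207959552, kv_onode=234881024,
                  meta=1140850688, data=218103808)
    print(sum(allocs.values()))            # 2801795072, ~98.5% of cache_size
    for name, n in allocs.items():
        print(f"{name:9s} {n / cache_size:6.1%}")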
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 811008 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 802816 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 802816 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 802816 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 786432 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 786432 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 778240 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 778240 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 778240 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 770048 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 770048 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 761856 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 761856 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 761856 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 753664 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 753664 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 745472 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 745472 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 745472 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 737280 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 737280 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b4c00 session 0x55d255ee54a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 729088 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 729088 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 729088 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 720896 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 720896 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 712704 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 712704 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 712704 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 696320 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889671 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76857344 unmapped: 696320 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 688128 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 62.717281342s of 62.721389771s, submitted: 1
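The _kv_sync_thread utilization lines report idle time within a measurement window plus the number of submitted transaction batches; the one above describes a thread that slept for all but ~4 ms of a 62.7 s window. The arithmetic:

    idle, window = 62.717281342, 62.721389771
    print(f"idle {idle / window:.4%}")                           # idle 99.9934%
    print(f"busy {(window - idle) * 1000:.1f} ms for 1 submit")  # ~4.1 ms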
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76865536 unmapped: 688128 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 679936 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 679936 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889803 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 655360 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76898304 unmapped: 655360 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 647168 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 622592 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 614400 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892843 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 614400 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 614400 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 606208 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 606208 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 606208 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892236 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.952752113s of 12.999465942s, submitted: 13
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 589824 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 581632 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 581632 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892104 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 573440 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 565248 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892104 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 557056 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 548864 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892104 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 540672 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d257d8e960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 532480 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892104 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 524288 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 516096 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 516096 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 516096 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892104 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 507904 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.530668259s of 28.534683228s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 499712 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892236 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 491520 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 491520 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 491520 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 483328 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892252 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 450560 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 442368 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890902 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 434176 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 425984 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.608840942s of 14.652837753s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890922 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 409600 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2558b3c00 session 0x55d258499a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 401408 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 393216 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890922 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d254cff800 session 0x55d258062d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 385024 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890922 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 376832 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 7035 writes, 29K keys, 7035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 7035 writes, 1264 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7035 writes, 29K keys, 7035 commit groups, 1.0 writes per commit group, ingest: 20.30 MB, 0.03 MB/s
Interval WAL: 7035 writes, 1264 syncs, 5.57 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
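One sanity check falls straight out of the DB Stats block above: the logged "5.57 writes per sync" is just cumulative WAL writes divided by syncs, and the interval ingest rate follows from 20.30 MB over the 600 s interval:

    writes, syncs = 7035, 1264
    print(round(writes / syncs, 2))        # 5.57 writes per sync, as logged
    print(round(20.30 / 600.0, 2))         # 0.03 MB/s interval ingest, as logged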
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 311296 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.108342171s of 15.111491203s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891054 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 303104 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 278528 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 262144 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891202 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 245760 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 237568 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891218 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.073250771s of 12.141806602s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 229376 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 221184 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892566 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 212992 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 196608 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 196608 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 196608 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 188416 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 188416 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 180224 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 180224 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 180224 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 172032 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 163840 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 155648 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 147456 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 139264 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 131072 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 122880 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d258073860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 114688 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 106496 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d6000 session 0x55d25807e3c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 98304 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 81920 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 73728 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 65536 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892434 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.913425446s of 44.949539185s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 57344 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 40960 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 32768 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892714 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 24576 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 0 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895722 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1032192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.468067169s of 11.525839806s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1007616 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895606 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894867 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 958464 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 950272 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894867 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b4c00 session 0x55d257d8eb40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894867 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 909312 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894867 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.265607834s of 25.303537369s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894999 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896527 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895920 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.023246765s of 16.073017120s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.181978226s of 20.186208725s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 1466368 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 147456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 114688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d25647e1e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 106496 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 98304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 65536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895788 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.203287125s of 21.889471054s, submitted: 206
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79609856 unmapped: 40960 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898960 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 16384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1032192 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898960 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.012138367s of 12.057500839s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1048576 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898353 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1040384 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1024000 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d6400 session 0x55d257fc23c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898221 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 66.225624084s of 67.823158264s, submitted: 2
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898353 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1015808 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899881 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899274 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.016518593s of 12.090629578s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1007616 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2558b3c00 session 0x55d2581c50e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898551 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.471771240s of 35.558815002s, submitted: 2
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898683 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 999424 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898699 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.698478699s of 10.050379753s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898399 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b4c00 session 0x55d2570efa40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.180503845s of 42.293552399s, submitted: 3
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897828 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 933888 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d258073860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.311359406s of 60.363491058s, submitted: 13
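
The _kv_sync_thread utilization line reports how much of the measurement window BlueStore's key/value sync thread spent idle: here it was busy for only about 52 ms out of roughly 60 s while committing 13 transactions, i.e. the OSD is nearly idle. The arithmetic:

    # Figures from the utilization line above.
    idle, window, submitted = 60.311359406, 60.363491058, 13
    busy = window - idle
    print(f"busy {busy * 1000:.1f} ms ({busy / window:.3%} of the window), "
          f"{submitted} transactions submitted")
    # busy 52.1 ms (0.086% of the window), 13 transactions submitted
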
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899604 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901132 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d257da2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901132 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.569141388s of 15.600605011s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 900832 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902644 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904156 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.944058418s of 10.986461639s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d7400 session 0x55d255f090e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903549 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d2578fdc20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.344142914s of 29.355075836s, submitted: 3
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 1794048 heap: 81747968 old mem: 2845415832 new mem: 2845415832
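
Here the heap figure steps from 80699392 to 81747968 while mapped stays at 79953920: the allocator grew the heap by exactly 1 MiB, and all of it shows up as unmapped (745472 + 1048576 = 1794048). The deltas:

    # Before/after figures from the two adjacent tune_memory lines.
    old_heap, new_heap = 80699392, 81747968
    old_unmapped, new_unmapped = 745472, 1794048
    assert new_heap - old_heap == 2**20          # heap grew by exactly 1 MiB
    assert new_unmapped - old_unmapped == 2**20  # ...and none of it is mapped
    print("mapped unchanged at", 79953920)
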
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903565 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 1794048 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1777664 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1777664 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902806 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.005134583s of 12.044654846s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2583081e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 1695744 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 71.363334656s of 71.371582031s, submitted: 2
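[Annotation] The _kv_sync_thread utilization lines report how busy BlueStore's RocksDB commit thread was over its sampling window; this one was idle for 71.363 s of 71.372 s. Worked out in Python with the figures from the line above:

    # Figures from the _kv_sync_thread utilization line above.
    idle, total, submitted = 71.363334656, 71.371582031, 2

    busy = total - idle   # ~0.00825 s of actual commit work
    print(f"idle: {100 * idle / total:.3f}%")   # 99.988% idle
    print(f"busy: {busy * 1000:.2f} ms across {submitted} submitted batch(es)")

At roughly 99.99% idle, the sync thread confirms this OSD is seeing almost no write traffic.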
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902367 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1662976 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902383 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902383 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.147457123s of 13.186235428s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902083 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7730 writes, 30K keys, 7730 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7730 writes, 1603 syncs, 4.82 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 695 writes, 1209 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
Interval WAL: 695 writes, 339 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
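[Annotation] One detail worth flagging in the block-cache lines of the dump above: occupancy is printed as 18446744073709551615, which is exactly 2^64 - 1 (UINT64_MAX). That is almost certainly an unsigned counter that has wrapped below zero, or a sentinel, rather than a real entry count; the same cache reports only 2.09 KB of usage. A one-line check:

    occupancy = 18446744073709551615
    assert occupancy == 2**64 - 1   # UINT64_MAX: an unsigned wraparound or
                                    # sentinel value, not a meaningful count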
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d6000 session 0x55d25807fe00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d2563fd2c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 89.622047424s of 89.626823425s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902515 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1433600 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1392640 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902515 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.022595406s of 11.070187569s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902367 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d2563f74a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.939117432s of 28.950923920s, submitted: 3
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650b400 session 0x55d2588e21e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.930031776s of 14.969452858s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903747 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903879 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.807968140s of 11.821829796s, submitted: 3
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905407 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d257fc23c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904648 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.167273521s of 10.208182335s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988450050s of 10.001911163s, submitted: 3
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d258498f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 180224 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 172032 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.289288521s of 10.427827835s, submitted: 17
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 106496 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1064960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904780 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 1015808 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 983040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 983040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 966656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.564165115s of 12.723128319s, submitted: 211
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 925696 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000067s
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2588e2d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d25647e780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 909312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 884736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.594898224s of 63.763050079s, submitted: 2
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906460 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 811008 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906460 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.920207977s of 13.966058731s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906328 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 778240 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2558b3c00 session 0x55d257fc2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 83.397789001s of 83.438568115s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.383620262s of 15.857924461s, submitted: 9
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d2588f05a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905437 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905589 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514731407s of 13.519078255s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905721 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907249 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2589712c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.442175865s of 14.482097626s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908461 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 745472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 745472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 737280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 737280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 720896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 720896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486126900s of 11.537532806s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907563 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d257fc3a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.445034027s of 58.675991058s, submitted: 4
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916064 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 663552 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 138 ms_handle_reset con 0x55d2570d6000 session 0x55d2564ca000
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc668000/0x0/0x4ffc00000, data 0xf559d/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 10952704 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 139 ms_handle_reset con 0x55d2570d6000 session 0x55d256453680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0x5676d8/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 139 ms_handle_reset con 0x55d25650b400 session 0x55d2564cbc20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957955 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f0000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 10928128 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960845 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 10928128 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486613274s of 11.676462173s, submitted: 57
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960021 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960153 data_alloc: 218103808 data_used: 98304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.504811287s of 10.536725044s, submitted: 8
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 11018240 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961517 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 11018240 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961517 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.018886566s of 10.053460121s, submitted: 8
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961385 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961385 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 ms_handle_reset con 0x55d25650a000 session 0x55d2563650e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 ms_handle_reset con 0x55d25650a400 session 0x55d257fc2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961537 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.855613708s of 13.859876633s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 11001856 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a800 session 0x55d258498960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 10756096 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 10756096 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a000 session 0x55d2580730e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbfd7000/0x0/0x4ffc00000, data 0x7808d6/0x833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989469 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a400 session 0x55d2586854a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650b400 session 0x55d257f6cb40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d258072960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10412032 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10412032 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 10207232 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 9633792 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008265 data_alloc: 218103808 data_used: 2273280
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008265 data_alloc: 218103808 data_used: 2273280
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.823438644s of 18.955329895s, submitted: 37
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 3637248 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037537 data_alloc: 218103808 data_used: 3076096
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 1089536 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbd7f000/0x0/0x4ffc00000, data 0x9d38a8/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039589 data_alloc: 218103808 data_used: 3117056
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305800 session 0x55d258a48f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25893af00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.227094650s of 23.355901718s, submitted: 39
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257def860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 7979008 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86d000/0x0/0x4ffc00000, data 0xd4b8a8/0xdff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 7979008 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 7897088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 3186688
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d257dee5a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86d000/0x0/0x4ffc00000, data 0xd4b8a8/0xdff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 7897088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d257dee780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305c00 session 0x55d2578a21e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25788c1e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069989 data_alloc: 218103808 data_used: 3186688
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93814784 unmapped: 7872512 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 6242304 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084581 data_alloc: 218103808 data_used: 5349376
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5881856 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084581 data_alloc: 218103808 data_used: 5349376
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95854592 unmapped: 5832704 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.023689270s of 21.075399399s, submitted: 11
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 4456448 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa346000/0x0/0x4ffc00000, data 0x12718b8/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97271808 unmapped: 4415488 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa2eb000/0x0/0x4ffc00000, data 0x12cb8b8/0x1380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135165 data_alloc: 218103808 data_used: 5574656
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2588d2d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 4374528 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133773 data_alloc: 218103808 data_used: 5566464
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 4251648 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa24c000/0x0/0x4ffc00000, data 0x136b8b8/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 4251648 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d25893a960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d2564cb2c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa24c000/0x0/0x4ffc00000, data 0x136b8b8/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97443840 unmapped: 4243456 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d2588f0780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046014 data_alloc: 218103808 data_used: 3174400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.008966446s of 13.345650673s, submitted: 79
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d25893a780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257390000 session 0x55d2570ed4a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047674 data_alloc: 218103808 data_used: 3170304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257d8ed20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 7143424 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 7143424 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982639 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982032 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.694840431s of 15.787263870s, submitted: 17
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981900 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d258956960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d25893a1e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d255f08b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2580623c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000614 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257fc3e00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257390000 session 0x55d257da3c20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d256364f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d257da25a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2584983c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d258970d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007440 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007440 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 9150464 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025224 data_alloc: 218103808 data_used: 2793472
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.343212128s of 23.383264542s, submitted: 8
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 9469952 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 9469952 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024988 data_alloc: 218103808 data_used: 2801664
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 8192000 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96108544 unmapped: 7749632 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 5799936 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 5799936 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 4751360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060953 data_alloc: 218103808 data_used: 2981888
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061105 data_alloc: 218103808 data_used: 2985984
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.644647598s of 15.819246292s, submitted: 49
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5783552 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 8910 writes, 33K keys, 8910 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8910 writes, 2141 syncs, 4.16 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1180 writes, 2991 keys, 1180 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
Interval WAL: 1180 writes, 538 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5783552 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060973 data_alloc: 218103808 data_used: 2985984
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060973 data_alloc: 218103808 data_used: 2985984
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258017680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2563645a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.520689011s of 13.525539398s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 5758976 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987063 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2563fcb40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987063 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987195 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.768692970s of 12.803792000s, submitted: 10
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96198656 unmapped: 7659520 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96198656 unmapped: 7659520 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96206848 unmapped: 7651328 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988723 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96215040 unmapped: 7643136 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d2581c41e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25807fe00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258970f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d257da34a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588e2b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d254a8c780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003815 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2563f61e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6d000/0x0/0x4ffc00000, data 0x64a90a/0x6ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.318846703s of 12.507612228s, submitted: 44
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d258308000
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004804 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6d000/0x0/0x4ffc00000, data 0x64a90a/0x6ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 7700480 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 7700480 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010968 data_alloc: 218103808 data_used: 921600
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.152463913s of 10.198073387s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011132 data_alloc: 218103808 data_used: 917504
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 7520256 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 7520256 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faecd000/0x0/0x4ffc00000, data 0x6e992d/0x79f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1,1,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 6111232 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa74f000/0x0/0x4ffc00000, data 0xe6192d/0xf17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa713000/0x0/0x4ffc00000, data 0xe9b92d/0xf51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086070 data_alloc: 218103808 data_used: 1970176
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa71b000/0x0/0x4ffc00000, data 0xe9b92d/0xf51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079950 data_alloc: 218103808 data_used: 1974272
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26279 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa718000/0x0/0x4ffc00000, data 0xe9e92d/0xf54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa718000/0x0/0x4ffc00000, data 0xe9e92d/0xf54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079950 data_alloc: 218103808 data_used: 1974272
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.148156166s of 15.456823349s, submitted: 125
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa717000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa717000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9400 session 0x55d2563f6d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d257176b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2575272c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d258685860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 13008896 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d257177c20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9400 session 0x55d2583083c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d257da2f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257da25a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257da3860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa041000/0x0/0x4ffc00000, data 0x157493d/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131730 data_alloc: 218103808 data_used: 1974272
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa041000/0x0/0x4ffc00000, data 0x157493d/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d257da2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2f800 session 0x55d2563f8b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 13017088 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133517 data_alloc: 218103808 data_used: 1978368
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 13017088 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101785600 unmapped: 10043392 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181397 data_alloc: 218103808 data_used: 9056256
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 7798784 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 7798784 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.626377106s of 18.689212799s, submitted: 15
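The _kv_sync_thread utilization lines summarize the BlueStore key-value sync thread over its last reporting window: idle seconds out of total seconds, plus the number of submitted transaction batches. The useful derived number is the busy fraction; the line above works out to about 0.3% busy, while the window later in this burst that reports "idle 1.173839927s of 10.133413315s" is about 88% busy. A one-function sketch:

    def kv_sync_busy(idle_s: float, window_s: float) -> float:
        """Fraction of the reporting window the kv sync thread spent working."""
        return 1.0 - idle_s / window_s

    print(f"{kv_sync_busy(18.626377106, 18.689212799):.1%}")  # ~0.3%
    print(f"{kv_sync_busy(1.173839927, 10.133413315):.1%}")   # ~88.4%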
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 7766016 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181989 data_alloc: 218103808 data_used: 9060352
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 7766016 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 4923392 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,10])
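Most heartbeat lines in this burst carry an empty op hist [], but the one above shows [0,0,0,0,0,0,0,10]. This looks like a power-of-two op-queue age histogram, where bucket i counts queued ops whose age falls in [2^(i-1), 2^i) of the underlying time unit; under that reading, the ten ops sit in the eighth bucket. A decoding sketch (the power-of-two bucketing and the unit are assumptions, not stated in this log):

    def decode_pow2_hist(buckets):
        # Assumed pow2 histogram: bucket 0 covers [0, 1), bucket i covers
        # [2**(i-1), 2**i) in whatever unit the histogram was filled with.
        for i, count in enumerate(buckets):
            if count:
                lo = 0 if i == 0 else 2 ** (i - 1)
                print(f"[{lo}, {2 ** i}) units: {count} ops")

    decode_pow2_hist([0, 0, 0, 0, 0, 0, 0, 10])  # -> [64, 128) units: 10 ops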
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110370816 unmapped: 3637248 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 3432448 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed960/0x1fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260019 data_alloc: 234881024 data_used: 10125312
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed960/0x1fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258859 data_alloc: 234881024 data_used: 10125312
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2575c4d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.864350319s of 11.624783516s, submitted: 79
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257837 data_alloc: 234881024 data_used: 10121216
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed950/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 9945088 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eed950/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,0,6])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d25893a000
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 10362880 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091164 data_alloc: 218103808 data_used: 1978368
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f950/0xf56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1.173839927s of 10.133413315s, submitted: 14
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 10354688 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2570ecd20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090609 data_alloc: 218103808 data_used: 1974272
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258957e00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090245 data_alloc: 218103808 data_used: 1974272
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.528004646s of 10.041405678s, submitted: 43
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 12869632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8cb/0x624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d258309a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 12795904 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010625 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010349 data_alloc: 218103808 data_used: 102400
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6800 session 0x55d254a8cb40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b4c00 session 0x55d258308d20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.167170525s of 10.927146912s, submitted: 18
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25560a400 session 0x55d2564dc780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564521e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 12713984 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564ca780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2581c45a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.099966049s of 17.958248138s, submitted: 233
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99876864 unmapped: 14131200 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022171 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d2581c4f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2581c52c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2583090e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022187 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255f09860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027416 data_alloc: 218103808 data_used: 724992
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.301165581s of 14.841221809s, submitted: 29
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027500 data_alloc: 218103808 data_used: 729088
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102293504 unmapped: 12763136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102293504 unmapped: 12763136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050564 data_alloc: 218103808 data_used: 946176
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 12673024 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabfd000/0x0/0x4ffc00000, data 0x9b992d/0xa6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057846 data_alloc: 218103808 data_used: 1101824
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101400576 unmapped: 13656064 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069133759s of 12.610257149s, submitted: 56
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25807e960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabdc000/0x0/0x4ffc00000, data 0x9da92d/0xa90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabdc000/0x0/0x4ffc00000, data 0x9da92d/0xa90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056818 data_alloc: 218103808 data_used: 1105920
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057218 data_alloc: 218103808 data_used: 1110016
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x9e092d/0xa96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x9e092d/0xa96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.912015915s of 10.996917725s, submitted: 5
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057610 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 13492224 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9e392d/0xa99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 13492224 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 13484032 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9e392d/0xa99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058178 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d2588cbc20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588ca5a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573a5400 session 0x55d2588cab40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2588cb4a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588cb680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588ca3c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d255f3ed20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258304400 session 0x55d255f3e3c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255f3f860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 19628032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 19628032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 19619840 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080050 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 19619840 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.358315468s of 13.568110466s, submitted: 6
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257fc25a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2589572c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d257e17860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d25793f680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2563f83c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082184 data_alloc: 218103808 data_used: 1122304
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0xcf194c/0xda9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcf494c/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102916 data_alloc: 218103808 data_used: 4268032
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: mgrc ms_handle_reset ms_handle_reset con 0x55d25650b800
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3916108464
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3916108464,v1:192.168.122.100:6801/3916108464]
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: mgrc handle_mgr_configure stats_period=5
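The four mgrc lines above show the OSD's mgr client cycling its session with the active ceph-mgr at 192.168.122.100: the transport connection resets, the old session is terminated, a new one is opened against the v2/v1 address pair, and the mgr immediately pushes its reporting interval (stats_period=5, i.e. send stats every 5 seconds). The many "osd.0 143 ms_handle_reset con ... session ..." lines scattered through this second are the same kind of transport reset seen from the OSD side, each naming the connection and session pointers involved. To see whether resets concentrate on a few connections, a tally over the log works; the file path below is illustrative, not the actual journal location:

    import re
    from collections import Counter

    resets = Counter()
    with open("ceph-osd.0.log") as f:      # hypothetical extract of this journal
        for line in f:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
            if m:
                resets[m.group(1)] += 1

    for con, count in resets.most_common(5):
        print(f"{con}: {count} resets")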
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.356122017s of 12.378782272s, submitted: 6
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0xcfa94c/0xdb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103484 data_alloc: 218103808 data_used: 4268032
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 18481152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 18481152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 17809408 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5be000/0x0/0x4ffc00000, data 0xff694c/0x10ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 17809408 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125352 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0xffd94c/0x10b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.289596558s of 11.366083145s, submitted: 17
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127274 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5b4000/0x0/0x4ffc00000, data 0x100094c/0x10b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127142 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 17752064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 17752064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127182 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.827489853s of 11.043312073s, submitted: 5
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5aa000/0x0/0x4ffc00000, data 0x100a94c/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127806 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127694 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.485318184s of 11.773607254s, submitted: 4
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127782 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a4000/0x0/0x4ffc00000, data 0x101094c/0x10c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127822 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59f000/0x0/0x4ffc00000, data 0x101594c/0x10cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127822 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.699817657s of 12.715748787s, submitted: 4
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59c000/0x0/0x4ffc00000, data 0x101894c/0x10d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59c000/0x0/0x4ffc00000, data 0x101894c/0x10d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128430 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 17629184 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa597000/0x0/0x4ffc00000, data 0x101d94c/0x10d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128334 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.147480965s of 11.409416199s, submitted: 4
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa597000/0x0/0x4ffc00000, data 0x101d94c/0x10d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128422 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa594000/0x0/0x4ffc00000, data 0x102094c/0x10d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa592000/0x0/0x4ffc00000, data 0x102294c/0x10da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128446 data_alloc: 218103808 data_used: 4333568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d257fc3680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25608c3c0
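
The ms_handle_reset lines record the messenger notifying osd.0 that a peer closed or reset a connection; con and session are in-process addresses of the connection and session objects, so a repeating con value means the same connection slot resetting again. A throwaway sketch for tallying which connections reset most often across an extract of this journal (the file name is hypothetical):

    import re
    from collections import Counter

    resets = Counter()
    with open("osd.0.log") as f:            # hypothetical extract of this journal
        for line in f:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
            if m:
                resets[m[1]] += 1

    for con, n in resets.most_common():
        print(con, n)                       # connection address, reset count
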
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa591000/0x0/0x4ffc00000, data 0x102394c/0x10db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.306691170s of 12.825207710s, submitted: 5
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2588e3c20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065426 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab8f000/0x0/0x4ffc00000, data 0xa2693c/0xadd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2578a2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064381 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f0f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.254345894s of 10.118084908s, submitted: 12
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064309 data_alloc: 218103808 data_used: 1118208
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 20545536 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257177860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019599 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019599 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.282787323s of 14.191265106s, submitted: 20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 20611072 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 20611072 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 20611072 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d258073a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588f1e00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588cad20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d258308b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.678023338s of 13.685975075s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2588d30e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25893a5a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d25807e5a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d25793fe00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2571761e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066720 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257176f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d257176b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257177680
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e2b40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079032 data_alloc: 218103808 data_used: 1806336
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 18915328 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e3a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 18882560 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.982872009s of 10.151477814s, submitted: 46
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2584981e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024870 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024870 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 18309120 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d258309c20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fae39000/0x0/0x4ffc00000, data 0x77f8a8/0x833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039122 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.847422600s of 15.979025841s, submitted: 2
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588e25a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faa28000/0x0/0x4ffc00000, data 0x77f8cb/0x834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 20455424 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 20455424 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040927 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 20439040 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 20439040 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 20439040 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d254adda40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faa28000/0x0/0x4ffc00000, data 0x77f8cb/0x834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d258062780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.972145081s of 32.033672333s, submitted: 16
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d255f090e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 20832256 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057517 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083205 data_alloc: 218103808 data_used: 3915776
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101187584 unmapped: 20168704 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083205 data_alloc: 218103808 data_used: 3915776
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 20160512 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 20160512 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 20160512 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.084077835s of 18.120235443s, submitted: 5
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 15065088 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 15048704 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124313 data_alloc: 218103808 data_used: 4767744
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129111 data_alloc: 218103808 data_used: 4808704
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130039 data_alloc: 218103808 data_used: 4833280
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255ee4960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.602562904s of 15.801031113s, submitted: 58
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588cba40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.333866119s of 22.364822388s, submitted: 7
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f1860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080521 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f1e00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 22781952 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 22781952 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 21233664 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128839 data_alloc: 218103808 data_used: 6434816
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 21037056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 21037056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105668608 unmapped: 21012480 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129143 data_alloc: 218103808 data_used: 6492160
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.974012375s of 18.638038635s, submitted: 19
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18014208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162431 data_alloc: 218103808 data_used: 6524928
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 15548416 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e58000/0x0/0x4ffc00000, data 0x13308cb/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257da2f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203685 data_alloc: 218103808 data_used: 6742016
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 14786560 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 14786560 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 14778368 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 14778368 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2575c63c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 14475264 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208050 data_alloc: 218103808 data_used: 6742016
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.759752274s of 11.006592751s, submitted: 72
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 14737408 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 14737408 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 14180352 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219126 data_alloc: 218103808 data_used: 8359936
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 14008320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 14008320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219126 data_alloc: 218103808 data_used: 8359936
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 14008320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.875556946s of 10.879505157s, submitted: 1
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14327808 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b9a000/0x0/0x4ffc00000, data 0x160d8cb/0x16c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 14254080 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248186 data_alloc: 234881024 data_used: 9187328
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248186 data_alloc: 234881024 data_used: 9187328
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 14041088 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.649332047s of 10.708267212s, submitted: 15
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 14548992 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 14548992 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b10000/0x0/0x4ffc00000, data 0x16968cb/0x174b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248226 data_alloc: 234881024 data_used: 9256960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b10000/0x0/0x4ffc00000, data 0x16968cb/0x174b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08400 session 0x55d25899e000
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08000 session 0x55d2578a3860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 16490496 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257fc3860
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196350 data_alloc: 218103808 data_used: 6742016
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e76000/0x0/0x4ffc00000, data 0x13318cb/0x13e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e76000/0x0/0x4ffc00000, data 0x13318cb/0x13e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2581c4f00
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.367655754s of 12.646665573s, submitted: 21
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2588ca960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196218 data_alloc: 218103808 data_used: 6742016
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 21921792 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e2960
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.658111572s of 27.196340561s, submitted: 36
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2563f74a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa915000/0x0/0x4ffc00000, data 0x8938a8/0x947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071844 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2581c52c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d255197a40
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564dcd20
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588d2780
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 21741568 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 21741568 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075127 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096863 data_alloc: 218103808 data_used: 3309568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096863 data_alloc: 218103808 data_used: 3309568
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.147645950s of 18.915796280s, submitted: 11
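
The _kv_sync_thread line reports idle time out of wall time, so the busy fraction is just the complement:

    idle, total, submitted = 18.147645950, 18.915796280, 11
    print(f"busy {(1 - idle / total) * 100:.1f}% over {total:.1f}s, "
          f"{submitted / total:.1f} commits/s")
    # -> busy ~4.1%, ~0.6 commits/s: the kv sync thread is essentially idle.
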
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 16400384 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109608960 unmapped: 17072128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa405000/0x0/0x4ffc00000, data 0xda18db/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 16769024 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0xda78db/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145195 data_alloc: 218103808 data_used: 3702784
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145211 data_alloc: 218103808 data_used: 3702784
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145211 data_alloc: 218103808 data_used: 3702784
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08400 session 0x55d257e165a0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2570ee3c0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.207839966s of 16.376241684s, submitted: 58
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2564de1e0
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 3002 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2014 writes, 6494 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 7.68 MB, 0.01 MB/s
    Interval WAL: 2014 writes, 861 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
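
Sanity-checking the interval WAL figures from the stats dump above:

    writes, syncs = 2014, 861
    print(f"{writes / syncs:.2f} writes per sync")  # -> 2.34, as reported
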
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 19111936 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}'
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}'
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107708416 unmapped: 18972672 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}'
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}'
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 18989056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 19013632 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:36:32 np0005481680 ceph-osd[81892]: do_command 'log dump' '{prefix=log dump}'
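
These do_command lines are the OSD servicing admin-socket requests ('config show', 'counter dump', and so on). A hedged sketch of issuing such a command from Python, assuming the usual asok wire format (NUL-terminated JSON in, 4-byte big-endian length plus JSON payload out); the socket path is illustrative, not taken from this log:

    # Minimal sketch of talking to the admin socket behind these do_command
    # entries. The wire format is my understanding of the asok protocol and
    # the socket path is a placeholder -- verify both before relying on this.
    import json, socket, struct

    def asok(path, **cmd):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(json.dumps(cmd).encode() + b"\0")
            (length,) = struct.unpack(">I", s.recv(4))
            buf = b""
            while len(buf) < length:
                buf += s.recv(length - len(buf))
            return buf

    # e.g. the call logged above as do_command 'config show':
    # asok("/var/run/ceph/ceph-osd.0.asok", prefix="config show")
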
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26291 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979932787' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
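
The audit-channel lines embed the dispatched command as a JSON array after "cmd="; a small extraction sketch (the sample line is copied from the log above):

    import json

    line = ("log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979932787' "
            "entity='client.admin' cmd=[{\"prefix\": \"mgr services\"}]: dispatch")
    payload = line.split("cmd=", 1)[1].rsplit(": dispatch", 1)[0]
    print(json.loads(payload)[0]["prefix"])  # -> mgr services
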
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26011 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 nova_compute[264665]: 2025-10-12 21:36:33.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:33 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26312 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804179859' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16938 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
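
These beast lines are radosgw's access log (here, load-balancer HEAD probes answered with 200). A regex sketch for the field layout as inferred from the sample lines, so treat the pattern as approximate:

    import re

    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:36:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(r'(\d+\.\d+\.\d+\.\d+) .*?\[(.*?)\] "(\S+) (\S+) [^"]+" (\d+)', line)
    ip, when, method, path, status = m.groups()
    print(ip, when, method, path, status)
    # -> 192.168.122.100 12/Oct/2025:21:36:33.467 +0000 HEAD / 200
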
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26020 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26333 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16956 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 12 17:36:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625266072' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 12 17:36:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26035 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16965 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26059 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:34.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26357 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
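
Cross-check: the pgmap's "60 GiB / 60 GiB avail" agrees with the per-OSD store_statfs totals seen earlier, assuming three OSDs in total (osd.0 plus its peers [1,2], as the heartbeat lines suggest):

    per_osd_total = 0x4ffc00000          # from the osd.0 heartbeat lines
    print(f"{3 * per_osd_total / 2**30:.1f} GiB")  # -> 60.0 GiB
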
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16980 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26077 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26381 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.16998 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 12 17:36:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493465815' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26399 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26092 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17013 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26411 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26107 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 12 17:36:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205380354' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 12 17:36:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534660982' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 12 17:36:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26128 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959235349' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 12 17:36:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:36.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26152 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:36 np0005481680 nova_compute[264665]: 2025-10-12 21:36:36.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011046618' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 12 17:36:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164877564' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 12 17:36:37 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26158 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780676243' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 12 17:36:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:37.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115923800' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 12 17:36:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319696724' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 12 17:36:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556252941' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 12 17:36:37 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26507 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261552686' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 12 17:36:38 np0005481680 nova_compute[264665]: 2025-10-12 21:36:38.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925869839' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 12 17:36:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26531 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:38 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 17:36:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17124 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:38 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 12 17:36:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304680684' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 12 17:36:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:36:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:38.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:38.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
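[editor's note] The three Alertmanager lines above show the ceph-dashboard webhook receivers on compute-1/compute-2 timing out from this host. A minimal reachability probe, sketched below, reproduces the same "dial tcp ... i/o timeout" failure mode outside Alertmanager; the hosts and port are copied from the log lines, the 5-second timeout is an assumption.

    # Probe the Alertmanager webhook targets seen in the log above.
    import socket

    TARGETS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in TARGETS:
        try:
            # Same failure mode as Alertmanager's "dial tcp ... i/o timeout".
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")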
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17142 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26576 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 12 17:36:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30180896' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17172 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26615 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 12 17:36:39 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428403137' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 12 17:36:39 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17190 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:40 np0005481680 podman[292248]: 2025-10-12 21:36:40.125322696 +0000 UTC m=+0.072882154 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, tcib_managed=true)
Oct 12 17:36:40 np0005481680 podman[292249]: 2025-10-12 21:36:40.162470771 +0000 UTC m=+0.115308724 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26633 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17211 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700848706' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 12 17:36:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:40.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26660 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26287 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
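[editor's note] The dispatch/finished pair above is a local admin-socket query against the monitor, not a network client. The same query can be issued by hand; a sketch follows, assuming the daemon name mon.compute-0 from the log (in a cephadm deployment this would typically run inside the mon container).

    # Query the mon's admin socket the same way the audit lines record.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "daemon", "mon.compute-0", "mon_status"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status["state"], status.get("quorum"))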
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17220 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26293 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 12 17:36:40 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515436819' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 12 17:36:40 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26678 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26305 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26314 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 12 17:36:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57032929' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26711 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17265 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26329 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:41 np0005481680 nova_compute[264665]: 2025-10-12 21:36:41.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26735 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:41 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:42] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:36:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:42] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
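[editor's note] The two lines above are Prometheus scraping the mgr's prometheus module. A sketch of the same fetch by hand; 9283 is the module's default port and is an assumption here, since the log does not show which port the scrape hit.

    # Fetch the ceph-mgr prometheus exposition endpoint directly.
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9283/metrics", timeout=10) as resp:
        body = resp.read().decode()

    # Metrics arrive in plain-text Prometheus exposition format.
    for line in body.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)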
Oct 12 17:36:42 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26353 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 12 17:36:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394296413' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 12 17:36:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:42 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26371 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:42 np0005481680 nova_compute[264665]: 2025-10-12 21:36:42.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:42 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17319 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:43 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26386 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:43 np0005481680 nova_compute[264665]: 2025-10-12 21:36:43.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203888629' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 12 17:36:43 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:43.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 12 17:36:43 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737464850' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 12 17:36:43 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26807 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 12 17:36:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628885387' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 12 17:36:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:44 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 12 17:36:44 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2754704682' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 12 17:36:44 np0005481680 nova_compute[264665]: 2025-10-12 21:36:44.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:44 np0005481680 nova_compute[264665]: 2025-10-12 21:36:44.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:44 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26837 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:45 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17370 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:45 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26846 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
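[editor's note] The once-per-second anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102, always 200 with an empty body, are the classic pattern of a load-balancer health check against radosgw. A sketch issuing the same probe; the listening port is not visible in the log, so the beast front-end default of 7480 is assumed.

    # Issue the same health-check probe the beast access lines record.
    import http.client

    conn = http.client.HTTPConnection("np0005481680", 7480, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # radosgw answers 200 with an empty body
    conn.close()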
Oct 12 17:36:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 12 17:36:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021449290' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.690 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:36:45 np0005481680 nova_compute[264665]: 2025-10-12 21:36:45.690 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:36:45 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26867 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 12 17:36:45 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389614626' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367644726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.255 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
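[editor's note] Nova's periodic resource audit shells out to "ceph df --format=json" above (and the mon audit channel records the matching client.openstack dispatch). A sketch of the same call, reading the cluster totals that back nova's free-disk figure; the --id/--conf values are copied from the log, and the JSON field names are ceph's stable "stats" keys.

    # Run the same command nova's resource tracker runs and read the totals.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024**3
    print(f"free: {free_gb:.2f} GiB of {stats['total_bytes'] / 1024**3:.2f} GiB")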
Oct 12 17:36:46 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26873 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:46 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17406 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.409 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.411 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4344MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.411 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.411 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:36:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.511 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.511 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.529 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:36:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489101660' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:36:46 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307389853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.946 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.951 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.968 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
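[editor's note] The inventory dict in the line above is what placement uses to size this host. A small worked example of the usual capacity formula, (total - reserved) * allocation_ratio, applied to the logged values: 8 vCPUs at ratio 4.0 give 32 schedulable VCPU, 7680-512 MB at ratio 1.0 give 7168 MEMORY_MB, and 59-1 GB at ratio 0.9 give 52.2 DISK_GB.

    # Capacity math behind the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity} schedulable")
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2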
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.970 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:36:46 np0005481680 nova_compute[264665]: 2025-10-12 21:36:46.970 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26485 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17427 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:47.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26906 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:47.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17436 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26912 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:47 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
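[editor's note] The pg_autoscaler block above follows a simple shape: the raw pg target is capacity_ratio * bias * (num_osds * mon_target_pg_per_osd), which is then quantized to a power of two. The multiplier of 300 — 3 OSDs at the default of 100 target PGs per OSD — is inferred from the logged numbers (e.g. 7.1857e-06 * 300 = 0.0021557 for '.mgr'); a sketch reproducing the logged targets follows.

    # Reproduce the raw pg targets from the pg_autoscaler lines above.
    def raw_pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return capacity_ratio * bias * num_osds * target_pg_per_osd

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # .mgr   -> 0.0021557...
    print(raw_pg_target(0.000665858301588852, 1.0))   # images -> 0.1997574...
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.00061047...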
Oct 12 17:36:47 np0005481680 nova_compute[264665]: 2025-10-12 21:36:47.971 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:47 np0005481680 nova_compute[264665]: 2025-10-12 21:36:47.972 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:47 np0005481680 nova_compute[264665]: 2025-10-12 21:36:47.972 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:36:47 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 12 17:36:47 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978331478' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 12 17:36:48 np0005481680 nova_compute[264665]: 2025-10-12 21:36:48.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:36:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:36:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26506 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 12 17:36:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95411724' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:36:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:48 np0005481680 nova_compute[264665]: 2025-10-12 21:36:48.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17481 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:48 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26957 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:48.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:48.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26533 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17487 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ovs-appctl[294046]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:36:49 np0005481680 ovs-appctl[294061]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26966 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:49 np0005481680 ovs-appctl[294076]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
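[editor's note] The three ovs-appctl warnings above just mean ovs-monitor-ipsec is not running on this host (its pidfile does not exist), which is expected when IPsec tunneling is not configured. A sketch of the pre-check a health script could do before targeting that daemon; the pidfile path is taken from the warning itself.

    # Check for the ovs-monitor-ipsec pidfile before querying the daemon.
    import os

    PIDFILE = "/var/run/openvswitch/ovs-monitor-ipsec.pid"

    if os.path.exists(PIDFILE):
        with open(PIDFILE) as fh:
            print("ovs-monitor-ipsec running, pid", fh.read().strip())
    else:
        print("ovs-monitor-ipsec not running; skipping IPsec status query")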
Oct 12 17:36:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.005000123s ======
Oct 12 17:36:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:49.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000123s
Oct 12 17:36:49 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26539 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 12 17:36:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398248107' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 12 17:36:49 np0005481680 nova_compute[264665]: 2025-10-12 21:36:49.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:36:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 12 17:36:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646163229' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 12 17:36:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:50 np0005481680 podman[294377]: 2025-10-12 21:36:50.127475735 +0000 UTC m=+0.083888946 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:36:50 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17514 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:50.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:50 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17523 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:50 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26566 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:50 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26572 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:51 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615247955' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 12 17:36:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:51.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224014111' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:36:51 np0005481680 nova_compute[264665]: 2025-10-12 21:36:51.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:36:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:52] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:36:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:36:52] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580841031' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:52 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17562 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:52.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:52 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26593 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:52 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26605 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 12 17:36:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493270532' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27053 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 nova_compute[264665]: 2025-10-12 21:36:53.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:36:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1227333468' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 12 17:36:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983492871' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:36:54 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27077 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 12 17:36:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775827912' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.490788945 +0000 UTC m=+0.067319824 container create 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:36:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:54.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:54 np0005481680 systemd[1]: Started libpod-conmon-17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7.scope.
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.464213878 +0000 UTC m=+0.040744757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:54 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.597782167 +0000 UTC m=+0.174313126 container init 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.608720605 +0000 UTC m=+0.185251484 container start 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.615903208 +0000 UTC m=+0.192434127 container attach 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:36:54 np0005481680 naughty_brahmagupta[295884]: 167 167
Oct 12 17:36:54 np0005481680 systemd[1]: libpod-17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7.scope: Deactivated successfully.
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.618999746 +0000 UTC m=+0.195530655 container died 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Oct 12 17:36:54 np0005481680 nova_compute[264665]: 2025-10-12 21:36:54.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:36:54 np0005481680 nova_compute[264665]: 2025-10-12 21:36:54.666 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:36:54 np0005481680 nova_compute[264665]: 2025-10-12 21:36:54.666 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:36:54 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5a6619160714688021888ee6dd4eafb1ec92a7e1ead3f8d24d7938efb0d08874-merged.mount: Deactivated successfully.
Oct 12 17:36:54 np0005481680 nova_compute[264665]: 2025-10-12 21:36:54.684 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:36:54 np0005481680 podman[295852]: 2025-10-12 21:36:54.698375227 +0000 UTC m=+0.274906136 container remove 17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:36:54 np0005481680 systemd[1]: libpod-conmon-17232e38587b5e6aae98572b765e0c0a2bac16fb67e9444ea5cd70ea69fb34f7.scope: Deactivated successfully.
Oct 12 17:36:54 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17610 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:54 np0005481680 podman[295930]: 2025-10-12 21:36:54.970772009 +0000 UTC m=+0.088747550 container create 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:36:55 np0005481680 systemd[1]: Started libpod-conmon-2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514.scope.
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:54.931441458 +0000 UTC m=+0.049417039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:36:55 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:55 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:55.113062489 +0000 UTC m=+0.231038060 container init 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:55.121850683 +0000 UTC m=+0.239826194 container start 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:55.125319781 +0000 UTC m=+0.243295372 container attach 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:36:55 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27098 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 12 17:36:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819315644' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 12 17:36:55 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26641 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:55 np0005481680 nifty_allen[295981]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:36:55 np0005481680 nifty_allen[295981]: --> All data devices are unavailable
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:55.563839479 +0000 UTC m=+0.681814990 container died 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 12 17:36:55 np0005481680 systemd[1]: libpod-2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514.scope: Deactivated successfully.
Oct 12 17:36:55 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27107 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:55 np0005481680 systemd[1]: var-lib-containers-storage-overlay-25c2ff2acfcf6fd57b1b348c52433422d78168ddbd3108ed58576a8a2553d026-merged.mount: Deactivated successfully.
Oct 12 17:36:55 np0005481680 podman[295930]: 2025-10-12 21:36:55.613923204 +0000 UTC m=+0.731898695 container remove 2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 17:36:55 np0005481680 systemd[1]: libpod-conmon-2525bd687deb6b1fbeae3451a2c8ca8bfa2af19009662845bc90b21cfd5be514.scope: Deactivated successfully.
Oct 12 17:36:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 12 17:36:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042333200' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 12 17:36:56 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17631 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.414853085 +0000 UTC m=+0.093862380 container create ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:36:56 np0005481680 systemd[1]: Started libpod-conmon-ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca.scope.
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.374912529 +0000 UTC m=+0.053921924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:36:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.511199387 +0000 UTC m=+0.190208742 container init ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.524965787 +0000 UTC m=+0.203975112 container start ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.529288947 +0000 UTC m=+0.208298272 container attach ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 12 17:36:56 np0005481680 angry_clarke[296206]: 167 167
Oct 12 17:36:56 np0005481680 systemd[1]: libpod-ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca.scope: Deactivated successfully.
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.533357701 +0000 UTC m=+0.212367026 container died ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:36:56 np0005481680 podman[296203]: 2025-10-12 21:36:56.54238103 +0000 UTC m=+0.077259907 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 12 17:36:56 np0005481680 systemd[1]: var-lib-containers-storage-overlay-38c0120d01ae0a7255d915dcdae2502aab56a41e309bcc3077d4f9f25c821be5-merged.mount: Deactivated successfully.
Oct 12 17:36:56 np0005481680 podman[296170]: 2025-10-12 21:36:56.579085894 +0000 UTC m=+0.258095179 container remove ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 12 17:36:56 np0005481680 systemd[1]: libpod-conmon-ceae3bffa996e66f30f85ceffb2431098bd00ba2546cbb6668a98ce5c6fd5bca.scope: Deactivated successfully.
Oct 12 17:36:56 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 12 17:36:56 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998839016' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 12 17:36:56 np0005481680 nova_compute[264665]: 2025-10-12 21:36:56.679 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:36:56 np0005481680 podman[296251]: 2025-10-12 21:36:56.783830954 +0000 UTC m=+0.072675670 container create 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 12 17:36:56 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:56 np0005481680 systemd[1]: Started libpod-conmon-55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4.scope.
Oct 12 17:36:56 np0005481680 nova_compute[264665]: 2025-10-12 21:36:56.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:56 np0005481680 podman[296251]: 2025-10-12 21:36:56.753987764 +0000 UTC m=+0.042832510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:56 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de47cf5bd56242ec751926225a3e3d3ee6df5c0d03cb7a464fdccd5e288f9f92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de47cf5bd56242ec751926225a3e3d3ee6df5c0d03cb7a464fdccd5e288f9f92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de47cf5bd56242ec751926225a3e3d3ee6df5c0d03cb7a464fdccd5e288f9f92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:56 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de47cf5bd56242ec751926225a3e3d3ee6df5c0d03cb7a464fdccd5e288f9f92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:56 np0005481680 podman[296251]: 2025-10-12 21:36:56.8709203 +0000 UTC m=+0.159765056 container init 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 12 17:36:56 np0005481680 podman[296251]: 2025-10-12 21:36:56.882146986 +0000 UTC m=+0.170991722 container start 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 17:36:56 np0005481680 podman[296251]: 2025-10-12 21:36:56.888151078 +0000 UTC m=+0.176995804 container attach 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17652 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]: {
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:    "0": [
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:        {
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "devices": [
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "/dev/loop3"
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            ],
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "lv_name": "ceph_lv0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "lv_size": "21470642176",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "name": "ceph_lv0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "tags": {
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.cluster_name": "ceph",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.crush_device_class": "",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.encrypted": "0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.osd_id": "0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.type": "block",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.vdo": "0",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:                "ceph.with_tpm": "0"
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            },
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "type": "block",
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:            "vg_name": "ceph_vg0"
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:        }
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]:    ]
Oct 12 17:36:57 np0005481680 zealous_perlman[296271]: }
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27143 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:36:57 np0005481680 systemd[1]: libpod-55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4.scope: Deactivated successfully.
Oct 12 17:36:57 np0005481680 podman[296251]: 2025-10-12 21:36:57.20612538 +0000 UTC m=+0.494970136 container died 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:36:57 np0005481680 systemd[1]: var-lib-containers-storage-overlay-de47cf5bd56242ec751926225a3e3d3ee6df5c0d03cb7a464fdccd5e288f9f92-merged.mount: Deactivated successfully.
Oct 12 17:36:57 np0005481680 podman[296251]: 2025-10-12 21:36:57.272792596 +0000 UTC m=+0.561637312 container remove 55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 17:36:57 np0005481680 systemd[1]: libpod-conmon-55dae9ac4ed9ab9dee4e7d194d19cd202611615dd11fbfab060aff2dd8a50fe4.scope: Deactivated successfully.
Oct 12 17:36:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:57.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:36:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:57.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
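The anonymous "HEAD / HTTP/1.0" requests above recur about once a second, alternating between 192.168.122.100 and 192.168.122.102, and always return 200 with an empty body; that pattern is consistent with load-balancer health probes against the radosgw beast frontend rather than real S3 traffic. A minimal sketch of such a probe (host and port are placeholders; the beast bind address is not shown in this log):

    import socket

    # Hypothetical RGW endpoint; substitute the actual beast address/port.
    HOST, PORT = "192.168.122.100", 8080

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # HTTP/1.0 with no extra headers, exactly as the access line records it.
        sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        status_line = sock.recv(1024).split(b"\r\n", 1)[0]
        print(status_line.decode())  # e.g. "HTTP/1.1 200 OK"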
Oct 12 17:36:57 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17661 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:57 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 12 17:36:57 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027394048' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 podman[296468]: 2025-10-12 21:36:58.042302247 +0000 UTC m=+0.065831236 container create ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:36:58 np0005481680 systemd[1]: Started libpod-conmon-ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00.scope.
Oct 12 17:36:58 np0005481680 nova_compute[264665]: 2025-10-12 21:36:58.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:36:58 np0005481680 podman[296468]: 2025-10-12 21:36:58.016562472 +0000 UTC m=+0.040091551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:58 np0005481680 podman[296468]: 2025-10-12 21:36:58.135296854 +0000 UTC m=+0.158825853 container init ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:36:58 np0005481680 podman[296468]: 2025-10-12 21:36:58.144362454 +0000 UTC m=+0.167891433 container start ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 12 17:36:58 np0005481680 podman[296468]: 2025-10-12 21:36:58.149263099 +0000 UTC m=+0.172792138 container attach ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 12 17:36:58 np0005481680 hardcore_varahamihira[296503]: 167 167
Oct 12 17:36:58 np0005481680 systemd[1]: libpod-ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00.scope: Deactivated successfully.
Oct 12 17:36:58 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26680 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 podman[296508]: 2025-10-12 21:36:58.199562289 +0000 UTC m=+0.031975215 container died ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:36:58 np0005481680 systemd[1]: var-lib-containers-storage-overlay-8dae0fbb1fa1aa04d289f00b8e08ae852d89ed275cb353dec9e8540578131073-merged.mount: Deactivated successfully.
Oct 12 17:36:58 np0005481680 podman[296508]: 2025-10-12 21:36:58.2440079 +0000 UTC m=+0.076420786 container remove ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 12 17:36:58 np0005481680 systemd[1]: libpod-conmon-ca2bae0923c144e233b205f844dcb764765cb434ad14df4b823a90573caa0a00.scope: Deactivated successfully.
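The ca2bae... container above went through create, init, start, attach, died, and remove within roughly a second, with the conmon scope and overlay mount cleaned up afterwards; that is the footprint of a one-shot `podman run --rm` invocation, which is how cephadm runs its periodic host probes out of the ceph image. The "167 167" the container printed is the ceph uid and gid, suggesting a stat-style ownership probe. A sketch of an equivalent one-shot run (the probed path is an assumption; the image digest is taken from the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # A one-shot container produces the same create/init/start/attach/died/remove
    # lifecycle events seen in the journal, then --rm triggers the remove.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],  # /var/lib/ceph is an assumed probe target
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected "167 167", the ceph uid/gid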
Oct 12 17:36:58 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 12 17:36:58 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973202851' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27176 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 podman[296535]: 2025-10-12 21:36:58.459420541 +0000 UTC m=+0.053812860 container create 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:36:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:36:58.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:36:58 np0005481680 systemd[1]: Started libpod-conmon-0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0.scope.
Oct 12 17:36:58 np0005481680 podman[296535]: 2025-10-12 21:36:58.429980132 +0000 UTC m=+0.024372501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:36:58 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:36:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7f4015c420c7398580ed4d246d002d3dacde3fbc91cfa0bac093e05ef8992/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7f4015c420c7398580ed4d246d002d3dacde3fbc91cfa0bac093e05ef8992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7f4015c420c7398580ed4d246d002d3dacde3fbc91cfa0bac093e05ef8992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:36:58 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7f4015c420c7398580ed4d246d002d3dacde3fbc91cfa0bac093e05ef8992/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
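The 0x7fffffff in these xfs messages is the largest 32-bit signed time_t: on filesystems created without the bigtime feature, inode timestamps cannot represent anything past that epoch second, which is why the kernel flags the limit at every remount. The cutoff date follows directly:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the largest 32-bit signed epoch second.
    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the "timestamps until 2038" in the message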
Oct 12 17:36:58 np0005481680 podman[296535]: 2025-10-12 21:36:58.558560994 +0000 UTC m=+0.152953383 container init 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:36:58 np0005481680 podman[296535]: 2025-10-12 21:36:58.567502181 +0000 UTC m=+0.161894540 container start 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:36:58 np0005481680 podman[296535]: 2025-10-12 21:36:58.572700594 +0000 UTC m=+0.167092953 container attach 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 12 17:36:58 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17688 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:36:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:36:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:36:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
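Both ceph-dashboard webhook receivers fail the same way: "dial tcp ...:8443: i/o timeout" means the TCP connection to compute-1 and compute-2 never completes, so nothing reachable is listening on that port (the "context deadline exceeded" variants are the same failure cut off by Alertmanager's notification timeout). A throwaway stand-in receiver is a quick way to confirm the path and port are reachable from this host; plain HTTP is an assumption here, since a real dashboard endpoint may be serving TLS on 8443:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Print whatever alert payload Alertmanager delivers, then ack it.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(self.path, body[:200])
            self.send_response(200)
            self.end_headers()

    # Listen where the failing receiver URL points: port 8443,
    # path /api/prometheus_receiver (this handler accepts any path).
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()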
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17694 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:36:59 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
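The numbers in this pg_autoscaler dump are internally consistent: the trailing 64411926528 in each effective_target_ratio line is the root's raw capacity in bytes (about 60 GiB, matching the pgmap lines), and each pool's pg target is the fraction of space used times its bias times the cluster-wide PG budget, with the pool left at its current pg_num when the computed target is not far enough away to justify a resize. Assuming the default mon_target_pg_per_osd of 100 and three OSDs (an inference from the ~60 GiB total, not stated in this log), the 'images' line reproduces exactly:

    # 'images' pool, values from the log line above.
    used_fraction = 0.000665858301588852
    bias = 1.0
    target_pg_per_osd = 100   # Ceph default; assumed, not shown in this log
    num_osds = 3              # inferred from the ~60 GiB raw capacity; assumed

    pg_target = used_fraction * bias * target_pg_per_osd * num_osds
    print(pg_target)  # ~0.1997574904766556, matching "pg target 0.19975749047665559"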
Oct 12 17:36:59 np0005481680 beautiful_goodall[296572]: {}
Oct 12 17:36:59 np0005481680 podman[296711]: 2025-10-12 21:36:59.36987347 +0000 UTC m=+0.030205790 container died 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:36:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:36:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:36:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:36:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
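These _set_new_cache_sizes lines are the mon's cache autotuner dividing its memory budget between the incremental-osdmap, full-osdmap, and RocksDB caches; the three allocations account for essentially the whole cache_size figure, with the small remainder presumably lost to chunked allocation:

    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)
    # -> 1015021568 5033163: the allocations sum to within ~5 MB of the budget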
Oct 12 17:37:00 np0005481680 systemd[1]: libpod-0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0.scope: Deactivated successfully.
Oct 12 17:37:00 np0005481680 systemd[1]: libpod-0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0.scope: Consumed 1.132s CPU time.
Oct 12 17:37:00 np0005481680 systemd[1]: var-lib-containers-storage-overlay-eff7f4015c420c7398580ed4d246d002d3dacde3fbc91cfa0bac093e05ef8992-merged.mount: Deactivated successfully.
Oct 12 17:37:00 np0005481680 podman[296711]: 2025-10-12 21:37:00.10360444 +0000 UTC m=+0.763936730 container remove 0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct 12 17:37:00 np0005481680 systemd[1]: libpod-conmon-0c664495ff8545b8e54536d58ff0d1b3eb863fe8c84831c6e3229e1b02b496d0.scope: Deactivated successfully.
Oct 12 17:37:00 np0005481680 lvm[296732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:37:00 np0005481680 lvm[296732]: VG ceph_vg0 finished
Oct 12 17:37:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:37:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:00.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:01 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:37:01 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:37:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:01 np0005481680 nova_compute[264665]: 2025-10-12 21:37:01.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:02] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:37:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:02] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Oct 12 17:37:02 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:37:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:02.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:02 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 12 17:37:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:03 np0005481680 nova_compute[264665]: 2025-10-12 21:37:03.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:03 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:37:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:37:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:37:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 12 17:37:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041445215' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 12 17:37:03 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26713 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:03 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17724 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:04 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17730 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:04 np0005481680 systemd[1]: Starting Time & Date Service...
Oct 12 17:37:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:04.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:04 np0005481680 systemd[1]: Started Time & Date Service.
Oct 12 17:37:04 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:37:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 12 17:37:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475950091' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 12 17:37:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 12 17:37:05 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698523535' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 12 17:37:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:05.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:05 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:06 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26737 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:06 np0005481680 nova_compute[264665]: 2025-10-12 21:37:06.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:07.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:07.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:07 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26755 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:08 np0005481680 nova_compute[264665]: 2025-10-12 21:37:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26761 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:08 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:37:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:08.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:37:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:08.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:08.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:09 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26779 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:09 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26785 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:37:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:11 np0005481680 podman[297362]: 2025-10-12 21:37:11.158754382 +0000 UTC m=+0.115632423 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:37:11 np0005481680 podman[297363]: 2025-10-12 21:37:11.211474304 +0000 UTC m=+0.162990108 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
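These health_status=healthy events are podman's periodic healthcheck timer running the configured test ('/openstack/healthcheck', bind-mounted read-only from /var/lib/openstack/healthchecks/<name>) inside each container. The same check can be invoked on demand with `podman healthcheck run`; a small wrapper, using the container names from the log:

    import subprocess

    # Run each container's configured healthcheck once; exit code 0 means healthy.
    for name in ("iscsid", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        status = "healthy" if r.returncode == 0 else f"unhealthy ({r.stdout or r.stderr})"
        print(name, status)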
Oct 12 17:37:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:11 np0005481680 nova_compute[264665]: 2025-10-12 21:37:11.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:37:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:37:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:12.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:13 np0005481680 nova_compute[264665]: 2025-10-12 21:37:13.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:14.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:16 np0005481680 nova_compute[264665]: 2025-10-12 21:37:16.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:17.295Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:17.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:18 np0005481680 nova_compute[264665]: 2025-10-12 21:37:18.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:37:18
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'backups']
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
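This balancer pass is a no-op: in upmap mode it prepares at most 10 pg-upmap-items changes per round and only acts while actual misplacement stays under the 0.05 ceiling, and with all 337 PGs active+clean and the pools nearly empty there is nothing to move, hence "prepared 0/10 upmap changes". A schematic of that gating, illustrative only and not Ceph's actual code:

    # Illustrative sketch of the balancer's guard rails; not Ceph source.
    MAX_MISPLACED = 0.05     # "max misplaced 0.050000" from the log
    MAX_OPTIMIZATIONS = 10   # the "/10" in "prepared 0/10 upmap changes"

    def plan_upmaps(misplaced_ratio, candidate_moves):
        if misplaced_ratio >= MAX_MISPLACED:
            return []        # too much data already moving; skip this round
        return candidate_moves[:MAX_OPTIMIZATIONS]

    print(len(plan_upmaps(0.0, [])))  # -> 0: an already-balanced cluster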
Oct 12 17:37:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:37:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:37:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:37:18.373 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:37:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:37:18.374 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:37:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:37:18.374 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:18.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
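
Each effective_target_ratio/pool pair above follows one formula: the pool's share of raw capacity, times its bias, times a cluster-wide PG budget, quantized to a power of two, with the autoscaler holding the current pg_num when the ideal value is within its change threshold (which is why tiny targets still print "quantized to 32 (current 32)"). The logged numbers reproduce exactly with a budget of 300 PGs, a value inferred from the output rather than printed by the module.

# Reproduces the pg_autoscaler arithmetic in the entries above.
# TOTAL_PG_BUDGET = 300 is inferred (each ratio times 300 matches the
# logged pg target); the module does not print it directly.
TOTAL_PG_BUDGET = 300

def raw_pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * TOTAL_PG_BUDGET

def quantize_pow2(x, minimum=1):
    # Round up to the next power of two, with a floor of `minimum`.
    n = minimum
    while n < x:
        n *= 2
    return n

images = raw_pg_target(0.000665858301588852, 1.0)
print(images)                 # ~0.1998; the log prints 0.19975749047665559
meta = raw_pg_target(5.087256625643029e-07, 4.0)
print(meta)                   # ~0.00061047; matches the cephfs.meta line
print(quantize_pow2(images))  # 1; 'images' stays at 32 because the change
                              # is inside the autoscaler's threshold
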
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:37:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:20.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:21 np0005481680 podman[297419]: 2025-10-12 21:37:21.134203422 +0000 UTC m=+0.087894418 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
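
The podman health_status event above embeds the container's full definition in its config_data field, which is a Python-literal dict (single quotes, bare True), not JSON. When post-processing these journal lines, ast.literal_eval parses it where json.loads would fail:

import ast

# Abbreviated config_data excerpt from the multipathd event above.
config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
               "'net': 'host', 'privileged': True, 'restart': 'always'}")

cfg = ast.literal_eval(config_data)   # json.loads rejects the single quotes
print(cfg["net"], cfg["privileged"])  # host True
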
Oct 12 17:37:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:21 np0005481680 nova_compute[264665]: 2025-10-12 21:37:21.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:22] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Oct 12 17:37:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:22] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
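
The two entries above are the same event recorded twice, once via the container's stdout and once through the mgr daemon's own logger: the prometheus mgr module answering a /metrics scrape with 48447 bytes for Prometheus 2.51.0. Fetching the endpoint by hand is a quick sanity check; 9283 is the module's customary default port, an assumption here since the access log omits it.

import urllib.request

# Port 9283 is assumed (the usual mgr prometheus module default); the
# CherryPy access log above does not record the listening port.
with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                            timeout=5) as r:
    body = r.read()
    print(r.status, len(body))  # expect 200 and a payload around 48 kB
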
Oct 12 17:37:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:22.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:23 np0005481680 nova_compute[264665]: 2025-10-12 21:37:23.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:25.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:26 np0005481680 nova_compute[264665]: 2025-10-12 21:37:26.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:27 np0005481680 podman[297471]: 2025-10-12 21:37:27.113887883 +0000 UTC m=+0.079110055 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 12 17:37:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:27.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:27.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:28 np0005481680 nova_compute[264665]: 2025-10-12 21:37:28.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:28.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:29.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000050s ======
Oct 12 17:37:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:30.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Oct 12 17:37:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:31.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:31 np0005481680 nova_compute[264665]: 2025-10-12 21:37:31.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:32] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Oct 12 17:37:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:32] "GET /metrics HTTP/1.1" 200 48447 "" "Prometheus/2.51.0"
Oct 12 17:37:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:33 np0005481680 nova_compute[264665]: 2025-10-12 21:37:33.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:37:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
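
The handle_command/audit pair above shows the mgr (entity mgr.compute-0.fmjeht) polling the monitor for the OSD blocklist, a routine query that recurs throughout this log. A CLI form of that mon_command, wrapped in a subprocess the same way the nova entries later in this section wrap their ceph invocations, would be:

import json, subprocess

# CLI equivalent of the {"prefix": "osd blocklist ls", "format": "json"}
# mon_command dispatched in the audit entry above.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(out) if out.strip() else [])
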
Oct 12 17:37:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:33.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:34 np0005481680 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 12 17:37:34 np0005481680 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 12 17:37:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:35.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:36 np0005481680 nova_compute[264665]: 2025-10-12 21:37:36.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:37.298Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:37.299Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:37:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:37.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:37:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:38 np0005481680 nova_compute[264665]: 2025-10-12 21:37:38.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:37:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:37:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:38.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:40.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:41 np0005481680 nova_compute[264665]: 2025-10-12 21:37:41.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:42] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:37:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:42] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:37:42 np0005481680 podman[297511]: 2025-10-12 21:37:42.175434043 +0000 UTC m=+0.126851819 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:37:42 np0005481680 podman[297512]: 2025-10-12 21:37:42.222408268 +0000 UTC m=+0.170498469 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 12 17:37:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:43 np0005481680 nova_compute[264665]: 2025-10-12 21:37:43.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:44 np0005481680 nova_compute[264665]: 2025-10-12 21:37:44.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.105989) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065106079, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1318, "num_deletes": 250, "total_data_size": 1859796, "memory_usage": 1910976, "flush_reason": "Manual Compaction"}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065115193, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1324699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34204, "largest_seqno": 35521, "table_properties": {"data_size": 1318514, "index_size": 3003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 19476, "raw_average_key_size": 23, "raw_value_size": 1304142, "raw_average_value_size": 1556, "num_data_blocks": 128, "num_entries": 838, "num_filter_entries": 838, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760304989, "oldest_key_time": 1760304989, "file_creation_time": 1760305065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9239 microseconds, and 4230 cpu microseconds.
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.115238) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1324699 bytes OK
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.115256) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.116489) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.116503) EVENT_LOG_v1 {"time_micros": 1760305065116498, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.116525) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1852936, prev total WAL file size 1852936, number of live WAL files 2.
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.117339) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1293KB)], [74(13MB)]
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065117431, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15715844, "oldest_snapshot_seqno": -1}
Oct 12 17:37:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6500 keys, 12107538 bytes, temperature: kUnknown
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065175231, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12107538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12067520, "index_size": 22692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 171002, "raw_average_key_size": 26, "raw_value_size": 11953682, "raw_average_value_size": 1839, "num_data_blocks": 890, "num_entries": 6500, "num_filter_entries": 6500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760305065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.175515) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12107538 bytes
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.176924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 271.5 rd, 209.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(21.0) write-amplify(9.1) OK, records in: 6986, records dropped: 486 output_compression: NoCompression
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.176944) EVENT_LOG_v1 {"time_micros": 1760305065176933, "job": 42, "event": "compaction_finished", "compaction_time_micros": 57888, "compaction_time_cpu_micros": 30266, "output_level": 6, "num_output_files": 1, "total_output_size": 12107538, "num_input_records": 6986, "num_output_records": 6500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065177293, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305065179574, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.117207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.179611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.179616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.179618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.179620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:37:45 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:37:45.179622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
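
The JOB 41/42 sequence above is a complete flush-plus-manual-compaction cycle on the mon's RocksDB store: a 1.3 MB L0 table is flushed from the memtable, merged with the 13.7 MB L6 table, and rewritten as a single 11.5 MB L6 table, after which both inputs and the old WAL are deleted. The amplification figures in the summary line follow directly from the EVENT_LOG byte counts:

# Byte counts taken from the JOB 41/42 entries above.
l0_input  = 1_324_699    # table #76, flushed from the memtable
total_in  = 15_715_844   # compaction input_data_size (#76 + #74)
total_out = 12_107_538   # table #77 written to L6

print(round(total_out / l0_input, 1))               # 9.1  -> write-amplify
print(round((total_in + total_out) / l0_input, 1))  # 21.0 -> read-write-amplify
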
Oct 12 17:37:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:45 np0005481680 nova_compute[264665]: 2025-10-12 21:37:45.658 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:45 np0005481680 nova_compute[264665]: 2025-10-12 21:37:45.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:46.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:47.300Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:37:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:47.300Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:47.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.665 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.695 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.696 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.697 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.697 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:37:47 np0005481680 nova_compute[264665]: 2025-10-12 21:37:47.698 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:37:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:37:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093667544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.213 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:37:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:37:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.499 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.501 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4372MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.502 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.502 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.588 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.589 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:37:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:48.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:48 np0005481680 nova_compute[264665]: 2025-10-12 21:37:48.615 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:37:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:48.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:48.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:37:49 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:37:49 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473148143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:37:49 np0005481680 nova_compute[264665]: 2025-10-12 21:37:49.095 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:37:49 np0005481680 nova_compute[264665]: 2025-10-12 21:37:49.106 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
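[editor's note] The processutils lines above show nova shelling out to ceph df to size the RBD-backed disk pool (a 0.480 s round trip here, with the matching mon-side dispatch in the ceph-mon audit log). A minimal sketch of the same probe, assuming the ceph CLI, /etc/ceph/ceph.conf and the "openstack" keyring are in place and that the JSON carries the usual top-level "stats" block:

    import json
    import subprocess

    # Re-run the exact probe logged above and report cluster free space.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]  # assumed key layout of "ceph df" JSON
    free_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"free: {free_gib:.1f} GiB")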
Oct 12 17:37:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:49 np0005481680 nova_compute[264665]: 2025-10-12 21:37:49.127 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
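[editor's note] The inventory dict in the report line above is what Placement prices against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged here:

    # Inventory as reported above; Placement admits allocations while
    # used + requested <= (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2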
Oct 12 17:37:49 np0005481680 nova_compute[264665]: 2025-10-12 21:37:49.130 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:37:49 np0005481680 nova_compute[264665]: 2025-10-12 21:37:49.130 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
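[editor's note] The lockutils pair bracketing this update (acquired after 0.001 s, held 0.628 s) is the standard oslo.concurrency named-semaphore pattern. A sketch of the same idiom, not nova's actual decorator stack:

    from oslo_concurrency import lockutils

    # Same "compute_resources" named-semaphore idiom as logged above:
    # the body runs with the lock held and releases it on return, so
    # concurrent periodic tasks cannot interleave claim accounting.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # read hypervisor state, reconcile claims, push inventory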
Oct 12 17:37:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:50.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
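[editor's note] The beast access-log lines repeating through this window are anonymous "HEAD / HTTP/1.0" health probes against radosgw from the other control-plane hosts (192.168.122.100/.102), all answered 200. A rough parser for the field layout as it appears in these samples; the layout is inferred from the log, not taken from a format specification:

    import re

    # Field layout assumed from the beast lines above.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:37:50.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("status"), m.group("latency"))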
Oct 12 17:37:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:51 np0005481680 podman[297635]: 2025-10-12 21:37:51.36717698 +0000 UTC m=+0.091976482 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 12 17:37:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:37:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:37:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:37:52 np0005481680 nova_compute[264665]: 2025-10-12 21:37:52.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:52 np0005481680 nova_compute[264665]: 2025-10-12 21:37:52.131 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:52 np0005481680 nova_compute[264665]: 2025-10-12 21:37:52.132 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:52.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:53 np0005481680 systemd[1]: session-58.scope: Deactivated successfully.
Oct 12 17:37:53 np0005481680 systemd[1]: session-58.scope: Consumed 2min 56.874s CPU time, 861.2M memory peak, read 374.6M from disk, written 291.4M to disk.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: Session 58 logged out. Waiting for processes to exit.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: Removed session 58.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: New session 59 of user zuul.
Oct 12 17:37:53 np0005481680 nova_compute[264665]: 2025-10-12 21:37:53.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:53 np0005481680 systemd[1]: Started Session 59 of User zuul.
Oct 12 17:37:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:37:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:53.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:37:53 np0005481680 systemd[1]: session-59.scope: Deactivated successfully.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: Session 59 logged out. Waiting for processes to exit.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: Removed session 59.
Oct 12 17:37:53 np0005481680 systemd-logind[783]: New session 60 of user zuul.
Oct 12 17:37:53 np0005481680 systemd[1]: Started Session 60 of User zuul.
Oct 12 17:37:54 np0005481680 systemd[1]: session-60.scope: Deactivated successfully.
Oct 12 17:37:54 np0005481680 systemd-logind[783]: Session 60 logged out. Waiting for processes to exit.
Oct 12 17:37:54 np0005481680 systemd-logind[783]: Removed session 60.
Oct 12 17:37:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:37:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:55.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:55 np0005481680 nova_compute[264665]: 2025-10-12 21:37:55.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:37:55 np0005481680 nova_compute[264665]: 2025-10-12 21:37:55.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:37:55 np0005481680 nova_compute[264665]: 2025-10-12 21:37:55.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:37:55 np0005481680 nova_compute[264665]: 2025-10-12 21:37:55.682 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
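[editor's note] _heal_instance_info_cache is one of several ComputeManager periodic tasks visible in this window (_instance_usage_audit and _poll_volume_usage fire a few seconds earlier); here it rebuilds its work list, finds no instances, and exits. The oslo.service pattern behind these tasks, sketched with an illustrative interval rather than nova's configured one:

    from oslo_service import periodic_task

    # Sketch of the periodic-task machinery driving the lines above;
    # spacing=60 is illustrative, not nova's default.
    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def heal_instance_info_cache(self, context):
            ...  # refresh one instance's cached network info per pass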
Oct 12 17:37:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:57 np0005481680 nova_compute[264665]: 2025-10-12 21:37:57.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:57.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:58 np0005481680 podman[297723]: 2025-10-12 21:37:58.14150212 +0000 UTC m=+0.099324798 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 12 17:37:58 np0005481680 nova_compute[264665]: 2025-10-12 21:37:58.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:37:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:37:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:37:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:37:58.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:37:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:37:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:37:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:37:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:37:59.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:00.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:02 np0005481680 nova_compute[264665]: 2025-10-12 21:38:02.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:02.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:38:03 np0005481680 nova_compute[264665]: 2025-10-12 21:38:03.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:38:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:03.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:38:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:38:04 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
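[editor's note] The handle_command/audit pairs above are cephadm's mgr module driving the mon through the same mon_command interface the CLI uses ("config generate-minimal-conf", "auth get", "osd tree", and so on). A sketch of issuing one of those commands via the rados Python bindings, assuming a reachable cluster and an admin keyring:

    import json
    import rados

    # Dispatch the same mon command audited above through librados.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode())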
Oct 12 17:38:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.265631284 +0000 UTC m=+0.079293398 container create 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:38:05 np0005481680 systemd[1]: Started libpod-conmon-60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b.scope.
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.23088193 +0000 UTC m=+0.044544094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:05 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.38296234 +0000 UTC m=+0.196624504 container init 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.396243427 +0000 UTC m=+0.209905551 container start 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.40061665 +0000 UTC m=+0.214278804 container attach 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:38:05 np0005481680 bold_jennings[297939]: 167 167
Oct 12 17:38:05 np0005481680 systemd[1]: libpod-60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b.scope: Deactivated successfully.
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.408057819 +0000 UTC m=+0.221719933 container died 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:38:05 np0005481680 systemd[1]: var-lib-containers-storage-overlay-5411e9fcbd8a9ce8c7218fea99b8cef7560d8110f273bc9abd7f6ed390eb9c2a-merged.mount: Deactivated successfully.
Oct 12 17:38:05 np0005481680 podman[297922]: 2025-10-12 21:38:05.465428769 +0000 UTC m=+0.279090883 container remove 60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:38:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:38:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:05 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:38:05 np0005481680 systemd[1]: libpod-conmon-60e2d01aefce0398bdb42f329fb9e219b210aee27ed134ecb8c5e1fd3088b06b.scope: Deactivated successfully.
Oct 12 17:38:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 12 17:38:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 12 17:38:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 12 17:38:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 12 17:38:05 np0005481680 radosgw[95273]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 12 17:38:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:05 np0005481680 podman[297963]: 2025-10-12 21:38:05.723363892 +0000 UTC m=+0.061178718 container create 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:38:05 np0005481680 podman[297963]: 2025-10-12 21:38:05.693322028 +0000 UTC m=+0.031136924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:05 np0005481680 systemd[1]: Started libpod-conmon-456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378.scope.
Oct 12 17:38:05 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:05 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:05 np0005481680 podman[297963]: 2025-10-12 21:38:05.852807626 +0000 UTC m=+0.190622472 container init 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 12 17:38:05 np0005481680 podman[297963]: 2025-10-12 21:38:05.864440722 +0000 UTC m=+0.202255568 container start 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:38:05 np0005481680 podman[297963]: 2025-10-12 21:38:05.869150971 +0000 UTC m=+0.206965817 container attach 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:38:06 np0005481680 mystifying_beaver[297980]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:38:06 np0005481680 mystifying_beaver[297980]: --> All data devices are unavailable
Oct 12 17:38:06 np0005481680 systemd[1]: libpod-456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378.scope: Deactivated successfully.
Oct 12 17:38:06 np0005481680 podman[297963]: 2025-10-12 21:38:06.312706318 +0000 UTC m=+0.650521164 container died 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 12 17:38:06 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fde6f5026bd258a9f57b614202d7a8c4fcb474c53dee89b5a5e037983a92a8f6-merged.mount: Deactivated successfully.
Oct 12 17:38:06 np0005481680 podman[297963]: 2025-10-12 21:38:06.373416333 +0000 UTC m=+0.711231179 container remove 456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:38:06 np0005481680 systemd[1]: libpod-conmon-456aa2c375814ae4be313b512a3dcca64dd00d0dca851c128aab3ef9b8d30378.scope: Deactivated successfully.
Oct 12 17:38:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:06.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:07 np0005481680 nova_compute[264665]: 2025-10-12 21:38:07.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.190555076 +0000 UTC m=+0.081356321 container create 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.158192723 +0000 UTC m=+0.048994018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:07 np0005481680 systemd[1]: Started libpod-conmon-6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7.scope.
Oct 12 17:38:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:07.303Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.314342006 +0000 UTC m=+0.205143261 container init 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.325956882 +0000 UTC m=+0.216758127 container start 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.329958634 +0000 UTC m=+0.220759899 container attach 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:38:07 np0005481680 modest_ritchie[298142]: 167 167
Oct 12 17:38:07 np0005481680 systemd[1]: libpod-6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7.scope: Deactivated successfully.
Oct 12 17:38:07 np0005481680 conmon[298142]: conmon 6540155a6ccd63100486 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7.scope/container/memory.events
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.335911005 +0000 UTC m=+0.226712250 container died 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:38:07 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9366fc944eb1edb8eccc0cf97d701108cf083d3a67228156366e17ca6cac6ddc-merged.mount: Deactivated successfully.
Oct 12 17:38:07 np0005481680 podman[298124]: 2025-10-12 21:38:07.394754303 +0000 UTC m=+0.285555558 container remove 6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:38:07 np0005481680 systemd[1]: libpod-conmon-6540155a6ccd63100486a0bf3508ead83a2b940bfe2030e91c10f574b8194fb7.scope: Deactivated successfully.
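[editor's note] The create/init/start/attach/died/remove sequences above (bold_jennings, mystifying_beaver, modest_ritchie, affectionate_moore) are cephadm launching short-lived ceph helper containers and tearing them down within a second or two. Roughly the same one-shot lifecycle via the CLI; the inner command here is illustrative, and the real invocations bind-mount config, keyrings and /dev:

    import subprocess

    # One-shot container run mirroring the lifecycle logged above;
    # "--rm" removes the container as soon as it exits.
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)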
Oct 12 17:38:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:07 np0005481680 podman[298167]: 2025-10-12 21:38:07.667158214 +0000 UTC m=+0.072785473 container create 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 12 17:38:07 np0005481680 systemd[1]: Started libpod-conmon-691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47.scope.
Oct 12 17:38:07 np0005481680 podman[298167]: 2025-10-12 21:38:07.635465128 +0000 UTC m=+0.041092447 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:07 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29971f1d06a6004bb5c7a5ff955a8c97afd4efd3b3e4118702c84d26d868a63f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29971f1d06a6004bb5c7a5ff955a8c97afd4efd3b3e4118702c84d26d868a63f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29971f1d06a6004bb5c7a5ff955a8c97afd4efd3b3e4118702c84d26d868a63f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:07 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29971f1d06a6004bb5c7a5ff955a8c97afd4efd3b3e4118702c84d26d868a63f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:07 np0005481680 podman[298167]: 2025-10-12 21:38:07.800740404 +0000 UTC m=+0.206367703 container init 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 12 17:38:07 np0005481680 podman[298167]: 2025-10-12 21:38:07.814877223 +0000 UTC m=+0.220504472 container start 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:38:07 np0005481680 podman[298167]: 2025-10-12 21:38:07.819553742 +0000 UTC m=+0.225181051 container attach 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]: {
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:    "0": [
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:        {
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "devices": [
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "/dev/loop3"
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            ],
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "lv_name": "ceph_lv0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "lv_size": "21470642176",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "name": "ceph_lv0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "tags": {
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.cluster_name": "ceph",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.crush_device_class": "",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.encrypted": "0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.osd_id": "0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.type": "block",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.vdo": "0",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:                "ceph.with_tpm": "0"
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            },
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "type": "block",
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:            "vg_name": "ceph_vg0"
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:        }
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]:    ]
Oct 12 17:38:08 np0005481680 affectionate_moore[298185]: }
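The JSON block printed by affectionate_moore above looks like `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes and their ceph.* tags. A minimal sketch of pulling the OSD-id → device mapping out of such a payload (the helper name and the trimmed sample are illustrative, shaped after the log):

```python
import json

# Trimmed from the container output above: OSD "0" on /dev/ceph_vg0/ceph_lv0.
payload = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
               "ceph.type": "block"}
    }
  ]
}
"""

def osd_devices(doc: str) -> dict:
    """Map OSD id -> [(lv_path, osd_fsid), ...] from ceph-volume JSON output."""
    return {
        osd_id: [(lv["lv_path"], lv["tags"].get("ceph.osd_fsid", "")) for lv in lvs]
        for osd_id, lvs in json.loads(doc).items()
    }

print(osd_devices(payload))
# {'0': [('/dev/ceph_vg0/ceph_lv0', '47abdfbc-9d1c-416d-8d2d-2f925f341a02')]}
```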
Oct 12 17:38:08 np0005481680 systemd[1]: libpod-691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47.scope: Deactivated successfully.
Oct 12 17:38:08 np0005481680 podman[298167]: 2025-10-12 21:38:08.162512689 +0000 UTC m=+0.568139948 container died 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:38:08 np0005481680 systemd[1]: var-lib-containers-storage-overlay-29971f1d06a6004bb5c7a5ff955a8c97afd4efd3b3e4118702c84d26d868a63f-merged.mount: Deactivated successfully.
Oct 12 17:38:08 np0005481680 podman[298167]: 2025-10-12 21:38:08.228810006 +0000 UTC m=+0.634437266 container remove 691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 12 17:38:08 np0005481680 systemd[1]: libpod-conmon-691e0212cbbc3b823466981e48c5f97d7b5587a53d47abf0d041592b61696a47.scope: Deactivated successfully.
Oct 12 17:38:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail
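The ceph-mgr pgmap DBG lines above and below repeat one fixed-shape summary (pg count, state breakdown, data/used/avail, optional client rates). Assuming that shape stays stable, a small regex sketch for extracting the fields:

```python
import re

LINE = ("pgmap v1241: 337 pgs: 337 active+clean; "
        "41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail")

# Assumed-stable shape of the mgr pgmap debug summary seen in this journal.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

m = PGMAP.search(LINE)
print(m.group("ver"), m.group("pgs"), m.group("states"), m.group("used"))
# 1241 337 337 active+clean 307 MiB
```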
Oct 12 17:38:08 np0005481680 nova_compute[264665]: 2025-10-12 21:38:08.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:08.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
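The radosgw "beast" lines are access-log entries in a combined-log-like shape; the once-per-second anonymous HEAD / probes from 192.168.122.100/.102 are presumably external health checks. A sketch parser, under the assumption that the line shape holds:

```python
import re

LINE = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
        '[12/Oct/2025:21:38:08.653 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

# Combined-log-like shape of the radosgw beast access lines in this journal.
BEAST = re.compile(
    r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] "
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<lat>[\d.]+)s'
)

m = BEAST.search(LINE)
print(m.group("ip"), m.group("req"), m.group("status"), float(m.group("lat")))
# 192.168.122.102 HEAD / HTTP/1.0 200 0.0
```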
Oct 12 17:38:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:08.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.099236315 +0000 UTC m=+0.070944275 container create cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 12 17:38:09 np0005481680 systemd[1]: Started libpod-conmon-cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc.scope.
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.071949491 +0000 UTC m=+0.043657531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.201364004 +0000 UTC m=+0.173072034 container init cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.217001372 +0000 UTC m=+0.188709332 container start cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.22123489 +0000 UTC m=+0.192942920 container attach cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:38:09 np0005481680 hardcore_elion[298318]: 167 167
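The bare `167 167` printed by hardcore_elion is a uid/gid pair; 167 is the ceph user/group id in RHEL-family Ceph images, and cephadm runs short-lived containers like this one to discover it. A hypothetical re-creation of such a probe (the exact command cephadm uses here is an assumption, not quoted from the log):

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

# Hypothetical uid/gid probe: print the owner of /var/lib/ceph inside the
# image, which should yield "167 167" as in the log line above.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.strip()
uid, gid = map(int, out.split())
print(uid, gid)  # expected: 167 167
```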
Oct 12 17:38:09 np0005481680 systemd[1]: libpod-cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc.scope: Deactivated successfully.
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.229717355 +0000 UTC m=+0.201425335 container died cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:38:09 np0005481680 systemd[1]: var-lib-containers-storage-overlay-4be983d8e3e87dc38429e175bb88086ead342f3645ef4a7e6a405d893bde20ab-merged.mount: Deactivated successfully.
Oct 12 17:38:09 np0005481680 podman[298302]: 2025-10-12 21:38:09.286800408 +0000 UTC m=+0.258508388 container remove cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:38:09 np0005481680 systemd[1]: libpod-conmon-cc89fef40b9bc53d6f64e5836f332f839d2f1999b8c4f53d07611fdf54c93cdc.scope: Deactivated successfully.
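Each of these probe containers runs the same create → init → start → attach → died → remove sequence within about a second. A sketch (a hypothetical helper over journal lines shaped like the podman events above) that pairs start/died events per container id to measure lifetimes:

```python
import re
from datetime import datetime

# Matches the podman event lines above; the capture keeps only six
# fractional digits of the timestamp so strptime's %f accepts it.
EVENT = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\d* \+0000 UTC "
    r".* container (?P<event>start|died) (?P<cid>[0-9a-f]{64})"
)

def lifetimes(lines):
    """Yield (short_container_id, seconds_alive) for start/died pairs."""
    started = {}
    for line in lines:
        m = EVENT.search(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
        if m["event"] == "start":
            started[m["cid"]] = ts
        elif m["cid"] in started:
            yield m["cid"][:12], (ts - started.pop(m["cid"])).total_seconds()

demo = [
    "podman[298302]: 2025-10-12 21:38:09.217001372 +0000 UTC m=+0.188709332 container start " + "c" * 64,
    "podman[298302]: 2025-10-12 21:38:09.229717355 +0000 UTC m=+0.201425335 container died " + "c" * 64,
]
print(list(lifetimes(demo)))  # [('cccccccccccc', 0.012716)]
```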
Oct 12 17:38:09 np0005481680 podman[298345]: 2025-10-12 21:38:09.540990557 +0000 UTC m=+0.077745740 container create 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 17:38:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:38:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:09.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:38:09 np0005481680 systemd[1]: Started libpod-conmon-7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b.scope.
Oct 12 17:38:09 np0005481680 podman[298345]: 2025-10-12 21:38:09.509344011 +0000 UTC m=+0.046099244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:38:09 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:38:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06864baca59f2bbfdb32458c13da0f5a06b288991439fb4761b8a6a8eb39880e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06864baca59f2bbfdb32458c13da0f5a06b288991439fb4761b8a6a8eb39880e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06864baca59f2bbfdb32458c13da0f5a06b288991439fb4761b8a6a8eb39880e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:09 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06864baca59f2bbfdb32458c13da0f5a06b288991439fb4761b8a6a8eb39880e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:38:09 np0005481680 podman[298345]: 2025-10-12 21:38:09.662401765 +0000 UTC m=+0.199157008 container init 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:38:09 np0005481680 podman[298345]: 2025-10-12 21:38:09.678153417 +0000 UTC m=+0.214908610 container start 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 12 17:38:09 np0005481680 podman[298345]: 2025-10-12 21:38:09.682972979 +0000 UTC m=+0.219728222 container attach 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:38:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
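For reference, the allocations in the recurring mon `_set_new_cache_sizes` line convert to roughly 973 MiB of total cache, 332 MiB each for incremental and full osdmaps, and 304 MiB for the kv (RocksDB) share:

```python
# Byte counts copied from the _set_new_cache_sizes line above.
for name, nbytes in [("cache_size", 1020054731),
                     ("inc_alloc", 348127232),
                     ("full_alloc", 348127232),
                     ("kv_alloc", 318767104)]:
    print(f"{name}: {nbytes / 2**20:.0f} MiB")
# cache_size: 973 MiB, inc_alloc/full_alloc: 332 MiB, kv_alloc: 304 MiB
```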
Oct 12 17:38:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 0 B/s wr, 185 op/s
Oct 12 17:38:10 np0005481680 lvm[298439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:38:10 np0005481680 lvm[298439]: VG ceph_vg0 finished
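The lvm[298439] pair is event-driven autoactivation reporting that every PV backing ceph_vg0 is online. A sketch of the same completeness check from Python, assuming an lvm2 build with JSON reporting (`--reportformat json`):

```python
import json
import subprocess

def vg_complete(vg: str = "ceph_vg0") -> bool:
    """True if every PV of the VG is present (no PV flagged missing)."""
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name,pv_missing"],
        check=True, capture_output=True, text=True,
    ).stdout)
    pvs = [pv for pv in report["report"][0]["pv"] if pv["vg_name"] == vg]
    return bool(pvs) and all(pv["pv_missing"] == "" for pv in pvs)

print(vg_complete())  # True once "PV /dev/loop3 online, VG ceph_vg0 is complete"
```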
Oct 12 17:38:10 np0005481680 ecstatic_keldysh[298362]: {}
Oct 12 17:38:10 np0005481680 systemd[1]: libpod-7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b.scope: Deactivated successfully.
Oct 12 17:38:10 np0005481680 systemd[1]: libpod-7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b.scope: Consumed 1.754s CPU time.
Oct 12 17:38:10 np0005481680 podman[298345]: 2025-10-12 21:38:10.576109986 +0000 UTC m=+1.112865179 container died 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:38:10 np0005481680 systemd[1]: var-lib-containers-storage-overlay-06864baca59f2bbfdb32458c13da0f5a06b288991439fb4761b8a6a8eb39880e-merged.mount: Deactivated successfully.
Oct 12 17:38:10 np0005481680 podman[298345]: 2025-10-12 21:38:10.636652337 +0000 UTC m=+1.173407530 container remove 7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:38:10 np0005481680 systemd[1]: libpod-conmon-7adf800cb23cd63758808543c60762d110d460396bef6c1e1c1e1d01870a3d2b.scope: Deactivated successfully.
Oct 12 17:38:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:38:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:38:10 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:11 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:11 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:38:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:38:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:38:12 np0005481680 nova_compute[264665]: 2025-10-12 21:38:12.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 0 B/s wr, 185 op/s
Oct 12 17:38:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:12.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:13 np0005481680 podman[298480]: 2025-10-12 21:38:13.153266496 +0000 UTC m=+0.107022145 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:38:13 np0005481680 podman[298481]: 2025-10-12 21:38:13.233174999 +0000 UTC m=+0.186949789 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
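The podman health_status events above embed each container's edpm_ansible definition as a config_data= label whose value is a Python dict literal (single quotes, bare True), not JSON. A sketch recovering it with ast.literal_eval; the sample is trimmed from the iscsid event:

```python
import ast

# Trimmed from the iscsid health_status event above; json.loads would
# reject this Python-literal syntax, ast.literal_eval parses it safely.
config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
               "'net': 'host', 'privileged': True, 'restart': 'always'}")

cfg = ast.literal_eval(config_data)   # literals only, never executes code
print(cfg["net"], cfg["privileged"])  # host True
```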
Oct 12 17:38:13 np0005481680 nova_compute[264665]: 2025-10-12 21:38:13.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:13.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 0 B/s wr, 185 op/s
Oct 12 17:38:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:14.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 12 17:38:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:17 np0005481680 nova_compute[264665]: 2025-10-12 21:38:17.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:17.304Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:38:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:17.304Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:38:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:17.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
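Alertmanager's webhook deliveries to the dashboard receiver on compute-1/compute-2 fail first with a dial i/o timeout and then with context deadline exceeded, which points at unreachable 8443 listeners rather than HTTP-level errors. A minimal probe under that assumption, reusing only the URL from the log (the requests library is assumed available):

```python
import requests

URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

try:
    # Same direction as the failing alertmanager POST; an empty alert list
    # is enough to tell a connect timeout apart from an HTTP-level error.
    r = requests.post(URL, json={"alerts": []}, timeout=5)
    print("reachable:", r.status_code)
except requests.exceptions.ConnectionError as exc:
    print("connect failed (matches the dial i/o timeout in the log):", exc)
except requests.exceptions.Timeout:
    print("timed out, as alertmanager saw")
```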
Oct 12 17:38:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:38:18
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
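This balancer pass is gated by the max-misplaced ratio from the log (0.05): new upmap changes are only planned while the cluster's misplaced fraction stays below it, and here 0 of 10 candidate changes were prepared because the pools are already balanced. A toy illustration of the gate (the object counts are hypothetical):

```python
MAX_MISPLACED = 0.05  # "Mode upmap, max misplaced 0.050000" from the log

def may_optimize(misplaced_objects: int, total_objects: int) -> bool:
    """Balancer only plans new moves while the misplaced ratio < threshold."""
    return total_objects == 0 or misplaced_objects / total_objects < MAX_MISPLACED

print(may_optimize(0, 10_000))    # True  -> may plan (found 0/10 changes here)
print(may_optimize(800, 10_000))  # False -> waits for backfill to catch up
```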
Oct 12 17:38:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:38:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:38:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:38:18.375 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:38:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:38:18.376 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:38:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:38:18.376 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:38:18 np0005481680 nova_compute[264665]: 2025-10-12 21:38:18.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:18.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:38:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
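The pg_autoscaler rows above all follow one piece of arithmetic: pg target = share of used space × bias × a cluster pg budget, then quantized to a power of two. With the numbers in this log the budget works out to exactly 300, consistent with a default of ~100 PGs per OSD across 3 OSDs (an inference from the data, not a quoted setting); the quantized value stays at the current pg_num when the target is far below it, since the autoscaler only acts on large enough deviations. Reproducing three rows:

```python
PG_BUDGET = 300  # inferred: every "pg target" above equals usage * bias * 300

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * PG_BUDGET

# usage/bias values copied from the log lines above:
print(pg_target(0.000665858301588852, 1.0))   # ~0.19975 ('images' -> 32)
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta' -> 16)
print(pg_target(7.185749983720779e-06, 1.0))  # ~0.00216 ('.mgr' -> 1)
```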
Oct 12 17:38:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 12 17:38:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:22 np0005481680 nova_compute[264665]: 2025-10-12 21:38:22.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:22 np0005481680 podman[298537]: 2025-10-12 21:38:22.131579301 +0000 UTC m=+0.092675249 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:38:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:22.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:23 np0005481680 nova_compute[264665]: 2025-10-12 21:38:23.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:24.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:26.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:27 np0005481680 nova_compute[264665]: 2025-10-12 21:38:27.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:27.305Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:38:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:27.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:27.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:28 np0005481680 nova_compute[264665]: 2025-10-12 21:38:28.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:28 np0005481680 podman[298589]: 2025-10-12 21:38:28.592526299 +0000 UTC m=+0.090997097 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 12 17:38:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:28.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:38:32 np0005481680 nova_compute[264665]: 2025-10-12 21:38:32.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:38:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:38:33 np0005481680 nova_compute[264665]: 2025-10-12 21:38:33.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:36.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:37 np0005481680 nova_compute[264665]: 2025-10-12 21:38:37.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:37.307Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:38 np0005481680 nova_compute[264665]: 2025-10-12 21:38:38.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:38.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:38:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:38.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:38:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:39.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:40.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:41.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:42] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:38:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:42] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:38:42 np0005481680 nova_compute[264665]: 2025-10-12 21:38:42.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:43 np0005481680 nova_compute[264665]: 2025-10-12 21:38:43.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:44 np0005481680 podman[298624]: 2025-10-12 21:38:44.128782301 +0000 UTC m=+0.089140479 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid)
Oct 12 17:38:44 np0005481680 podman[298625]: 2025-10-12 21:38:44.158499647 +0000 UTC m=+0.117694486 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 12 17:38:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:44 np0005481680 nova_compute[264665]: 2025-10-12 21:38:44.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:45 np0005481680 nova_compute[264665]: 2025-10-12 21:38:45.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:45 np0005481680 nova_compute[264665]: 2025-10-12 21:38:45.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:45 np0005481680 nova_compute[264665]: 2025-10-12 21:38:45.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:45 np0005481680 nova_compute[264665]: 2025-10-12 21:38:45.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 12 17:38:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:47 np0005481680 nova_compute[264665]: 2025-10-12 21:38:47.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:47.307Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:47.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:47 np0005481680 nova_compute[264665]: 2025-10-12 21:38:47.679 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:38:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:38:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:38:48 np0005481680 nova_compute[264665]: 2025-10-12 21:38:48.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:48 np0005481680 nova_compute[264665]: 2025-10-12 21:38:48.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:48 np0005481680 nova_compute[264665]: 2025-10-12 21:38:48.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:38:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:38:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:48.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:38:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:48.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:38:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:48.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:38:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:49.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.701 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.702 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:38:49 np0005481680 nova_compute[264665]: 2025-10-12 21:38:49.702 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:38:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:38:50 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524694318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.209 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
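The lines above bracket the resource tracker's disk audit: nova shells out to ceph df, and the monitor's audit channel records the dispatch of the same command. A sketch of the same probe, assuming the conventional top-level stats keys of ceph df --format=json output (total_bytes / total_avail_bytes are an assumption about the JSON layout, not taken from this log):

    import json
    import subprocess

    # The exact command the resource tracker logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]  # assumed `ceph df -f json` layout
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)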
Oct 12 17:38:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.466 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.467 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.467 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.467 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.532 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.533 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:38:50 np0005481680 nova_compute[264665]: 2025-10-12 21:38:50.597 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:38:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.003000074s ======
Oct 12 17:38:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Oct 12 17:38:51 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:38:51 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225673908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:38:51 np0005481680 nova_compute[264665]: 2025-10-12 21:38:51.055 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:38:51 np0005481680 nova_compute[264665]: 2025-10-12 21:38:51.060 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:38:51 np0005481680 nova_compute[264665]: 2025-10-12 21:38:51.081 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 12 17:38:51 np0005481680 nova_compute[264665]: 2025-10-12 21:38:51.083 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:38:51 np0005481680 nova_compute[264665]: 2025-10-12 21:38:51.083 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
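The inventory dict in the lines above is what placement schedules against. As a worked check (assuming placement's usual capacity formula, (total - reserved) * allocation_ratio), these numbers yield 32 schedulable VCPUs, 7168 MB of RAM, and about 52.2 GB of disk:

    # Worked check of the inventory reported for provider
    # d63acd5d-c9c0-44fc-813b-0eadb368ddab above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~52.2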
Oct 12 17:38:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:51.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:52] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:38:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:38:52] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:38:52 np0005481680 nova_compute[264665]: 2025-10-12 21:38:52.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:52 np0005481680 nova_compute[264665]: 2025-10-12 21:38:52.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:52 np0005481680 nova_compute[264665]: 2025-10-12 21:38:52.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:52 np0005481680 nova_compute[264665]: 2025-10-12 21:38:52.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:52.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:53 np0005481680 podman[298746]: 2025-10-12 21:38:53.157866927 +0000 UTC m=+0.119308166 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 12 17:38:53 np0005481680 nova_compute[264665]: 2025-10-12 21:38:53.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:53.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:54.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:38:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:55.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:56 np0005481680 nova_compute[264665]: 2025-10-12 21:38:56.685 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:38:56 np0005481680 nova_compute[264665]: 2025-10-12 21:38:56.685 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:38:56 np0005481680 nova_compute[264665]: 2025-10-12 21:38:56.686 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:38:56 np0005481680 nova_compute[264665]: 2025-10-12 21:38:56.740 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:38:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:57 np0005481680 nova_compute[264665]: 2025-10-12 21:38:57.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:57.309Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:38:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:38:58 np0005481680 nova_compute[264665]: 2025-10-12 21:38:58.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:38:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:38:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:38:58.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:38:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:58.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:38:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:58.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:38:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:38:58.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:38:59 np0005481680 podman[298772]: 2025-10-12 21:38:59.125194014 +0000 UTC m=+0.086080311 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:38:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:38:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:38:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:38:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:00.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:01.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:01 np0005481680 nova_compute[264665]: 2025-10-12 21:39:01.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:01 np0005481680 nova_compute[264665]: 2025-10-12 21:39:01.682 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:01 np0005481680 nova_compute[264665]: 2025-10-12 21:39:01.682 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 12 17:39:01 np0005481680 nova_compute[264665]: 2025-10-12 21:39:01.694 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 12 17:39:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:02] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:39:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:02] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:39:02 np0005481680 nova_compute[264665]: 2025-10-12 21:39:02.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:02.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:39:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:39:03 np0005481680 nova_compute[264665]: 2025-10-12 21:39:03.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:04.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:06.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:07 np0005481680 nova_compute[264665]: 2025-10-12 21:39:07.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:07.309Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:08 np0005481680 nova_compute[264665]: 2025-10-12 21:39:08.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:08.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:08.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:09.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:09 np0005481680 nova_compute[264665]: 2025-10-12 21:39:09.707 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:10.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:11.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:39:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:12] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:39:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht'
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht'
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:39:12 np0005481680 nova_compute[264665]: 2025-10-12 21:39:12.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht'
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht'
Oct 12 17:39:12 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:39:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:12.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.864938801 +0000 UTC m=+0.084428139 container create ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 12 17:39:12 np0005481680 systemd[1]: Started libpod-conmon-ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a.scope.
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.830048934 +0000 UTC m=+0.049538302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:12 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.968026224 +0000 UTC m=+0.187515592 container init ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.979414594 +0000 UTC m=+0.198903932 container start ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.98432694 +0000 UTC m=+0.203816328 container attach ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:39:12 np0005481680 busy_matsumoto[299018]: 167 167
Oct 12 17:39:12 np0005481680 systemd[1]: libpod-ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a.scope: Deactivated successfully.
Oct 12 17:39:12 np0005481680 conmon[299018]: conmon ac672a04b1d222c354a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a.scope/container/memory.events
Oct 12 17:39:12 np0005481680 podman[299002]: 2025-10-12 21:39:12.990714071 +0000 UTC m=+0.210203399 container died ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:39:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-0e14e0bf94393e1b2b4f38f0bb305a801e5374ba3d9a5ddef8fc687b59317e11-merged.mount: Deactivated successfully.
Oct 12 17:39:13 np0005481680 podman[299002]: 2025-10-12 21:39:13.047406735 +0000 UTC m=+0.266896073 container remove ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:39:13 np0005481680 systemd[1]: libpod-conmon-ac672a04b1d222c354a64504ee07f0468e740f413fcd218f28349e5fba623e5a.scope: Deactivated successfully.
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.290318466 +0000 UTC m=+0.065428555 container create 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:39:13 np0005481680 systemd[1]: Started libpod-conmon-6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378.scope.
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.265296589 +0000 UTC m=+0.040406708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:13 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:13 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.408014381 +0000 UTC m=+0.183124480 container init 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.42013556 +0000 UTC m=+0.195245659 container start 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.424627214 +0000 UTC m=+0.199737343 container attach 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:39:13 np0005481680 nova_compute[264665]: 2025-10-12 21:39:13.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:13.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:13 np0005481680 loving_margulis[299061]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:39:13 np0005481680 loving_margulis[299061]: --> All data devices are unavailable
Oct 12 17:39:13 np0005481680 systemd[1]: libpod-6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378.scope: Deactivated successfully.
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.852771479 +0000 UTC m=+0.627881588 container died 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:39:13 np0005481680 systemd[1]: var-lib-containers-storage-overlay-c90c265fd5d5c707ef284b7b71e9d75e29e2652a13b9963e1a39b184e8a5d4fb-merged.mount: Deactivated successfully.
Oct 12 17:39:13 np0005481680 podman[299043]: 2025-10-12 21:39:13.916051329 +0000 UTC m=+0.691161418 container remove 6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_margulis, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:39:13 np0005481680 systemd[1]: libpod-conmon-6f513e71be92c7d92ed3ff92f2cab9c7e5396b5cbc9c938ecc9f20fc658a2378.scope: Deactivated successfully.
Oct 12 17:39:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:14 np0005481680 podman[299139]: 2025-10-12 21:39:14.260231737 +0000 UTC m=+0.067438177 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 12 17:39:14 np0005481680 podman[299140]: 2025-10-12 21:39:14.354589338 +0000 UTC m=+0.150303516 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.707319544 +0000 UTC m=+0.048528476 container create 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:39:14 np0005481680 systemd[1]: Started libpod-conmon-2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8.scope.
Oct 12 17:39:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:14.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.68636527 +0000 UTC m=+0.027574172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:14 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.805550693 +0000 UTC m=+0.146759645 container init 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.816120102 +0000 UTC m=+0.157329004 container start 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.820111754 +0000 UTC m=+0.161320726 container attach 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:39:14 np0005481680 sleepy_mclaren[299243]: 167 167
Oct 12 17:39:14 np0005481680 systemd[1]: libpod-2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8.scope: Deactivated successfully.
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.825307967 +0000 UTC m=+0.166516879 container died 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:39:14 np0005481680 systemd[1]: var-lib-containers-storage-overlay-046014164f23283c04fd99c806cc9a7f81ca599be1278e3825e6905380f47321-merged.mount: Deactivated successfully.
Oct 12 17:39:14 np0005481680 podman[299227]: 2025-10-12 21:39:14.885175609 +0000 UTC m=+0.226384521 container remove 2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_mclaren, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:39:14 np0005481680 systemd[1]: libpod-conmon-2c9f80fecd394ecd750efad1c9a6c8a3ce027d337b3d21b486339a1a35043cc8.scope: Deactivated successfully.
Oct 12 17:39:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.136131786 +0000 UTC m=+0.066584086 container create cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 12 17:39:15 np0005481680 systemd[1]: Started libpod-conmon-cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9.scope.
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.108533443 +0000 UTC m=+0.038985793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:15 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e64a1f327c6639abc559960c6e8313580035c90fa0c64814f6337c79ccc5933/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e64a1f327c6639abc559960c6e8313580035c90fa0c64814f6337c79ccc5933/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e64a1f327c6639abc559960c6e8313580035c90fa0c64814f6337c79ccc5933/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:15 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e64a1f327c6639abc559960c6e8313580035c90fa0c64814f6337c79ccc5933/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.239302521 +0000 UTC m=+0.169754791 container init cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.253169834 +0000 UTC m=+0.183622114 container start cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.257250097 +0000 UTC m=+0.187702377 container attach cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]: {
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:    "0": [
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:        {
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "devices": [
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "/dev/loop3"
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            ],
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "lv_name": "ceph_lv0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "lv_size": "21470642176",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "name": "ceph_lv0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "tags": {
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.cluster_name": "ceph",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.crush_device_class": "",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.encrypted": "0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.osd_id": "0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.type": "block",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.vdo": "0",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:                "ceph.with_tpm": "0"
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            },
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "type": "block",
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:            "vg_name": "ceph_vg0"
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:        }
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]:    ]
Oct 12 17:39:15 np0005481680 xenodochial_leavitt[299286]: }
Oct 12 17:39:15 np0005481680 systemd[1]: libpod-cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9.scope: Deactivated successfully.
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.586773802 +0000 UTC m=+0.517226102 container died cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:39:15 np0005481680 systemd[1]: var-lib-containers-storage-overlay-6e64a1f327c6639abc559960c6e8313580035c90fa0c64814f6337c79ccc5933-merged.mount: Deactivated successfully.
Oct 12 17:39:15 np0005481680 podman[299269]: 2025-10-12 21:39:15.650385462 +0000 UTC m=+0.580837772 container remove cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_leavitt, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 12 17:39:15 np0005481680 systemd[1]: libpod-conmon-cdc44c914ef6daba9ea99caa15b8692cb3d405ea4e5ae53c1e4e6ec1ccd864d9.scope: Deactivated successfully.
Oct 12 17:39:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:15.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.394999279 +0000 UTC m=+0.069154581 container create a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:39:16 np0005481680 systemd[1]: Started libpod-conmon-a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714.scope.
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.364978325 +0000 UTC m=+0.039133677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.495222329 +0000 UTC m=+0.169377671 container init a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.505429589 +0000 UTC m=+0.179584901 container start a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.5101969 +0000 UTC m=+0.184352212 container attach a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 12 17:39:16 np0005481680 serene_heyrovsky[299422]: 167 167
Oct 12 17:39:16 np0005481680 systemd[1]: libpod-a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714.scope: Deactivated successfully.
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.514540881 +0000 UTC m=+0.188696183 container died a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:39:16 np0005481680 systemd[1]: var-lib-containers-storage-overlay-56a586158439391dca44b2395f1b1076752c0a1de63dfd1114bd23e809f7f4c4-merged.mount: Deactivated successfully.
Oct 12 17:39:16 np0005481680 podman[299405]: 2025-10-12 21:39:16.566990266 +0000 UTC m=+0.241145578 container remove a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_heyrovsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:39:16 np0005481680 systemd[1]: libpod-conmon-a3cc250eb99be077eb2629b21e76a6bbbe4174cc5501fc960267667e24816714.scope: Deactivated successfully.
Oct 12 17:39:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:16.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:16 np0005481680 podman[299446]: 2025-10-12 21:39:16.832372378 +0000 UTC m=+0.078062987 container create d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 12 17:39:16 np0005481680 systemd[1]: Started libpod-conmon-d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4.scope.
Oct 12 17:39:16 np0005481680 podman[299446]: 2025-10-12 21:39:16.805517185 +0000 UTC m=+0.051207844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:39:16 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:39:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9736928c76a1d55d290e91ebfda1b4c342a7392f64efee946258835b9c80e86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9736928c76a1d55d290e91ebfda1b4c342a7392f64efee946258835b9c80e86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9736928c76a1d55d290e91ebfda1b4c342a7392f64efee946258835b9c80e86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:16 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9736928c76a1d55d290e91ebfda1b4c342a7392f64efee946258835b9c80e86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:39:16 np0005481680 podman[299446]: 2025-10-12 21:39:16.933303357 +0000 UTC m=+0.178993986 container init d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:39:16 np0005481680 podman[299446]: 2025-10-12 21:39:16.951025238 +0000 UTC m=+0.196715847 container start d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:39:16 np0005481680 podman[299446]: 2025-10-12 21:39:16.956010214 +0000 UTC m=+0.201700803 container attach d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:39:17 np0005481680 nova_compute[264665]: 2025-10-12 21:39:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:17.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:17.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
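The anonymous "HEAD / HTTP/1.0" entries above (repeating every ~2 s through the rest of this excerpt, alternating between 192.168.122.100 and 192.168.122.102) have the shape of load-balancer health probes against the radosgw beast frontend. A minimal sketch of such a probe follows; the host and port are assumptions, since the excerpt never shows which address radosgw listens on:

    import http.client

    RGW_HOST = "192.168.122.100"  # hypothetical frontend address
    RGW_PORT = 8080               # hypothetical port; check rgw_frontends

    # Unauthenticated HEAD against the root resource: radosgw logs the
    # caller as "anonymous" and answers 200 with an empty body.
    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)
    conn.close()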
Oct 12 17:39:17 np0005481680 lvm[299538]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:39:17 np0005481680 lvm[299538]: VG ceph_vg0 finished
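The two lvm lines above are event-based autoactivation reporting that the last PV of ceph_vg0 came online. A sketch that reads the VG back to confirm completeness (assumes the lvm2 CLI is present; --reportformat json is standard in recent lvm2 releases):

    import json
    import subprocess

    # "vgs" reports one row per VG; pv_count covers all PVs the VG expects.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    ).stdout
    vg = json.loads(out)["report"][0]["vg"][0]
    print(vg["vg_name"], "pv_count:", vg["pv_count"])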
Oct 12 17:39:17 np0005481680 laughing_jemison[299462]: {}
Oct 12 17:39:17 np0005481680 systemd[1]: libpod-d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4.scope: Deactivated successfully.
Oct 12 17:39:17 np0005481680 podman[299446]: 2025-10-12 21:39:17.776892983 +0000 UTC m=+1.022583582 container died d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 12 17:39:17 np0005481680 systemd[1]: libpod-d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4.scope: Consumed 1.426s CPU time.
Oct 12 17:39:17 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d9736928c76a1d55d290e91ebfda1b4c342a7392f64efee946258835b9c80e86-merged.mount: Deactivated successfully.
Oct 12 17:39:17 np0005481680 podman[299446]: 2025-10-12 21:39:17.837945146 +0000 UTC m=+1.083635745 container remove d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:39:17 np0005481680 systemd[1]: libpod-conmon-d80228886b95aae70e3f2ad8eae0b1d6e85e418ffcd9a50f24e92b0eb1e904f4.scope: Deactivated successfully.
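The init/start/attach/died/remove sequence for laughing_jemison above (with its stdout, the bare "{}", in between) is the journal signature of a one-shot `podman run --rm` against the ceph image, the way cephadm launches short-lived probe containers. A minimal sketch of the pattern; the real argument list is not shown in this excerpt, so "ceph --version" is a stand-in:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm removes the container the moment it exits, which is why
    # "container died" above is followed within ~60 ms by "container remove".
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph", "--version"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout.strip())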
Oct 12 17:39:17 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:39:17 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:39:17 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:39:17 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:39:18
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'backups', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
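The balancer pass above found nothing to move: "prepared 0/10" reads as zero prepared optimizations out of a per-pass budget of 10 (plausibly mgr/balancer/upmap_max_optimizations; the option name is an inference, not something this log states), which is expected with all 337 PGs active+clean. The balancer's state can be read back with the standard status subcommand, sketched here:

    import json
    import subprocess

    # "ceph balancer status" reports the active flag and the mode logged above.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["active"], status["mode"])  # expect: True upmap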
Oct 12 17:39:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:39:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:39:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:39:18.377 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:39:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:39:18.378 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:39:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:39:18.378 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
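The Acquiring/acquired/released triplet above is oslo.concurrency's instrumented locking (the "inner" frame in each message is its synchronized wrapper). A minimal sketch of the same pattern:

    from oslo_concurrency import lockutils

    # The wrapper logs "Acquiring lock", "acquired ... waited", and
    # "released ... held" around the body, exactly as in the triplet above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # inspect child processes while holding the in-process lock

    _check_child_processes()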
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:18 np0005481680 nova_compute[264665]: 2025-10-12 21:39:18.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:18.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:18.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:39:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:18.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:39:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:39:18 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
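Every "pg target" in the autoscaler block above is reproduced exactly by capacity_ratio x bias x 300; the constant 300 is plausibly osd_count x mon_target_pg_per_osd (3 x 100 on this cluster), a deduction from the numbers rather than something the log states. A check against three of the logged lines:

    import math

    K = 300  # deduced multiplier; see note above

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        target = ratio * bias * K
        assert math.isclose(target, logged, rel_tol=1e-12), pool
        print(f"{pool}: {target:.6g} matches the log")

    # The autoscaler then rounds to a power of two and leaves pg_num alone
    # unless the result is far enough from the current value, which is why
    # tiny targets above still read "quantized to 32 (current 32)".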
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:39:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
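The rbd_support handlers above reload mirror-snapshot and trash-purge schedules for each rbd pool (vms, volumes, backups, images). The schedules can be listed with the standard rbd subcommands, sketched here for one pool:

    import subprocess

    # Both schedule listings are regular rbd CLI subcommands; empty output
    # simply means no schedule is configured, consistent with the bare
    # "start_after=" values in the log above.
    for cmd in (
        ["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
        ["rbd", "trash", "purge", "schedule", "ls", "--pool", "vms"],
    ):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(" ".join(cmd), "->", out.stdout.strip() or "(no schedules)")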
Oct 12 17:39:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:19.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:20 np0005481680 ceph-mgr[73901]: [devicehealth INFO root] Check health
Oct 12 17:39:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:21.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:22 np0005481680 nova_compute[264665]: 2025-10-12 21:39:22.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:22.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:23 np0005481680 nova_compute[264665]: 2025-10-12 21:39:23.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:24 np0005481680 podman[299586]: 2025-10-12 21:39:24.157399714 +0000 UTC m=+0.109101738 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
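The health_status=healthy entry above is podman's healthcheck timer running the configured test ('/openstack/healthcheck' per the config_data). The same check can be triggered on demand; a sketch:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command and exits 0 when the check passes.
    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")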
Oct 12 17:39:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:26.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:27 np0005481680 nova_compute[264665]: 2025-10-12 21:39:27.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:27.312Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:39:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:27.312Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
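Alertmanager has now failed the ceph-dashboard webhook with both failure modes: "context deadline exceeded" (connected but no answer in time) and "dial tcp ... i/o timeout" (no connection at all). A minimal reachability probe for one receiver, as a sketch; it exercises only the TCP/HTTP path and does not send a valid Alertmanager webhook payload:

    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        URL, data=b"{}", headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, HTTP", exc.code)  # an HTTP error still means TCP works
    except OSError as exc:
        print("unreachable:", exc)          # timeout, refused, or DNS failure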
Oct 12 17:39:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:27.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:28 np0005481680 nova_compute[264665]: 2025-10-12 21:39:28.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:28.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:28.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:30 np0005481680 podman[299638]: 2025-10-12 21:39:30.131998325 +0000 UTC m=+0.092065714 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 12 17:39:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:39:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:30.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:39:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:32 np0005481680 nova_compute[264665]: 2025-10-12 21:39:32.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:32.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:39:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:39:33 np0005481680 nova_compute[264665]: 2025-10-12 21:39:33.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:33.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:34.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:37 np0005481680 nova_compute[264665]: 2025-10-12 21:39:37.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:37.314Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:37.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:38 np0005481680 nova_compute[264665]: 2025-10-12 21:39:38.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:38.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=cleanup t=2025-10-12T21:39:39.146458041Z level=info msg="Completed cleanup jobs" duration=17.703151ms
Oct 12 17:39:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=plugins.update.checker t=2025-10-12T21:39:39.292378204Z level=info msg="Update check succeeded" duration=63.586147ms
Oct 12 17:39:39 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-grafana-compute-0[104099]: logger=grafana.update.checker t=2025-10-12T21:39:39.30637198Z level=info msg="Update check succeeded" duration=53.910062ms
Oct 12 17:39:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:42] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Oct 12 17:39:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:42] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Oct 12 17:39:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:42 np0005481680 nova_compute[264665]: 2025-10-12 21:39:42.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:42.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:43 np0005481680 nova_compute[264665]: 2025-10-12 21:39:43.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:43.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:44.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:45 np0005481680 podman[299672]: 2025-10-12 21:39:45.12139642 +0000 UTC m=+0.072426603 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 12 17:39:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:45 np0005481680 podman[299673]: 2025-10-12 21:39:45.153824897 +0000 UTC m=+0.107054716 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:39:45 np0005481680 nova_compute[264665]: 2025-10-12 21:39:45.680 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:45.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:46 np0005481680 nova_compute[264665]: 2025-10-12 21:39:46.658 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:46 np0005481680 nova_compute[264665]: 2025-10-12 21:39:46.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:47 np0005481680 nova_compute[264665]: 2025-10-12 21:39:47.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:47.315Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:39:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:47.315Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:39:47 np0005481680 nova_compute[264665]: 2025-10-12 21:39:47.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:47.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:39:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217319654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:39:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217319654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:39:48 np0005481680 nova_compute[264665]: 2025-10-12 21:39:48.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:48 np0005481680 nova_compute[264665]: 2025-10-12 21:39:48.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:48 np0005481680 nova_compute[264665]: 2025-10-12 21:39:48.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
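The "skipping" message above means soft-delete reclaim is disabled on this node: _reclaim_queued_deletes only runs when reclaim_instance_interval is positive. The option lives in nova.conf's [DEFAULT] section; a sketch that reads it back (the path is the usual default, an assumption here):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("/etc/nova/nova.conf")
    # <= 0 (the default) disables the periodic reclaim of soft-deleted
    # instances, producing the "skipping..." message above.
    print(cfg.getint("DEFAULT", "reclaim_instance_interval", fallback=0))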
Oct 12 17:39:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:48.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.686 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.687 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.687 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.687 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:39:51 np0005481680 nova_compute[264665]: 2025-10-12 21:39:51.688 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:39:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:39:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:39:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132710534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.203 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
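That 0.515 s `ceph df` round trip is nova's resource tracker shelling out through oslo.concurrency. The equivalent call, sketched:

    from oslo_concurrency import processutils

    # processutils.execute logs "Running cmd (subprocess)" and the
    # "returned: 0 in ..." line seen above, and raises
    # ProcessExecutionError on a non-zero exit.
    stdout, _stderr = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    print(stdout[:120])  # per-pool usage JSON, consumed by the tracker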
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.454 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.456 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4530MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.456 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.457 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.505728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192505767, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1368, "num_deletes": 251, "total_data_size": 2537444, "memory_usage": 2579824, "flush_reason": "Manual Compaction"}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192523957, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2473148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35522, "largest_seqno": 36889, "table_properties": {"data_size": 2466718, "index_size": 3629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13745, "raw_average_key_size": 20, "raw_value_size": 2453762, "raw_average_value_size": 3597, "num_data_blocks": 157, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760305066, "oldest_key_time": 1760305066, "file_creation_time": 1760305192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 18298 microseconds, and 10542 cpu microseconds.
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.524022) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2473148 bytes OK
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.524047) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.525852) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.525877) EVENT_LOG_v1 {"time_micros": 1760305192525869, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.525900) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2531517, prev total WAL file size 2531517, number of live WAL files 2.
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.527419) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2415KB)], [77(11MB)]
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192527472, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14580686, "oldest_snapshot_seqno": -1}
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.576 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.576 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.595 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing inventories for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6664 keys, 12465883 bytes, temperature: kUnknown
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192600296, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12465883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12424417, "index_size": 23690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175127, "raw_average_key_size": 26, "raw_value_size": 12307306, "raw_average_value_size": 1846, "num_data_blocks": 929, "num_entries": 6664, "num_filter_entries": 6664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760305192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.600641) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12465883 bytes
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.602224) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.0 rd, 171.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 11.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(10.9) write-amplify(5.0) OK, records in: 7182, records dropped: 518 output_compression: NoCompression
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.602255) EVENT_LOG_v1 {"time_micros": 1760305192602241, "job": 44, "event": "compaction_finished", "compaction_time_micros": 72905, "compaction_time_cpu_micros": 48699, "output_level": 6, "num_output_files": 1, "total_output_size": 12465883, "num_input_records": 7182, "num_output_records": 6664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192603338, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305192608166, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.527365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.608246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.608253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.608256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.608259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:39:52 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:39:52.608261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
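
The compaction summary above is internally consistent: every headline figure RocksDB prints for JOB 44 can be re-derived from the byte counts in the surrounding EVENT_LOG_v1 lines. A worked check, pure arithmetic on values taken from the log:

    # Inputs, all copied from the JOB 43/44 event lines above.
    l0_in = 2_473_148        # table #79, the Level-0 input
    total_in = 14_580_686    # input_data_size (tables #79 + #77)
    out = 12_465_883         # table #80, the compaction output
    usecs = 72_905           # compaction_time_micros

    print(out / l0_in)                # write-amplify        -> ~5.0
    print((total_in + out) / l0_in)   # read-write-amplify   -> ~10.9
    print(total_in / usecs)           # rd, bytes/us == MB/s -> ~200.0
    print(out / usecs)                # wr                   -> ~171.0
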
Oct 12 17:39:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
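
radosgw's beast frontend emits one access line per request in the fixed layout seen above (client, user, timestamp, request line, status, body bytes, latency). A small parser for that layout, inferred from these examples rather than from radosgw documentation:

    import re

    line = ('beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous '
            '[12/Oct/2025:21:39:52.828 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = re.search(
        r'(\S+) - (\S+) \[(.*?)\] "(.*?)" (\d+) (\d+).*latency=(\S+)', line)
    addr, user, ts, req, status, nbytes, latency = m.groups()
    print(addr, user, req, status, latency)
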
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.947 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating ProviderTree inventory for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.947 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Updating inventory in ProviderTree for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.966 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing aggregate associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 12 17:39:52 np0005481680 nova_compute[264665]: 2025-10-12 21:39:52.986 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Refreshing trait associations for resource provider d63acd5d-c9c0-44fc-813b-0eadb368ddab, traits: COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AVX2,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.002 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:39:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:39:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473738052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.491 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.500 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.518 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.521 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.521 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
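
The audit cycle that just released "compute_resources" pushed the inventory dict logged at 21:39:52.947. Under standard placement semantics, that dict fixes what the scheduler may place on this host: per resource class, capacity = (total - reserved) * allocation_ratio. Applied to the logged values:

    # Values copied from the ProviderTree inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
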
Oct 12 17:39:53 np0005481680 nova_compute[264665]: 2025-10-12 21:39:53.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:39:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:54.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:55 np0005481680 podman[299797]: 2025-10-12 21:39:55.129807619 +0000 UTC m=+0.082108181 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 12 17:39:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:39:55 np0005481680 nova_compute[264665]: 2025-10-12 21:39:55.523 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:39:55 np0005481680 nova_compute[264665]: 2025-10-12 21:39:55.524 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:39:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:56 np0005481680 nova_compute[264665]: 2025-10-12 21:39:56.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:39:56 np0005481680 nova_compute[264665]: 2025-10-12 21:39:56.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:39:56 np0005481680 nova_compute[264665]: 2025-10-12 21:39:56.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:39:56 np0005481680 nova_compute[264665]: 2025-10-12 21:39:56.684 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:39:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:57 np0005481680 nova_compute[264665]: 2025-10-12 21:39:57.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:39:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:57.316Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:39:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:39:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:39:58 np0005481680 nova_compute[264665]: 2025-10-12 21:39:58.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:39:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:39:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:39:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:39:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:58.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:39:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:39:58.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
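
Alertmanager is repeatedly failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443, first as context-deadline expiries and here as outright dial timeouts, which points at the listeners being down or filtered rather than merely slow. A quick reachability probe for the two receivers (hostnames and port copied from the errors above):

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "reachable")
        except OSError as exc:
            # A timeout here matches the 'dial tcp ... i/o timeout' above.
            print(host, "unreachable:", exc)
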
Oct 12 17:39:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:39:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:39:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:39:59.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 failed cephadm daemon(s)
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.hypubd on compute-0 is in error state
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.mxbywc on compute-1 is in error state
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.wptquy on compute-2 is in error state
Oct 12 17:40:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: Health detail: HEALTH_WARN 3 failed cephadm daemon(s)
Oct 12 17:40:00 np0005481680 ceph-mon[73608]: [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
Oct 12 17:40:00 np0005481680 ceph-mon[73608]:    daemon nfs.cephfs.2.0.compute-0.hypubd on compute-0 is in error state
Oct 12 17:40:00 np0005481680 ceph-mon[73608]:    daemon nfs.cephfs.0.0.compute-1.mxbywc on compute-1 is in error state
Oct 12 17:40:00 np0005481680 ceph-mon[73608]:    daemon nfs.cephfs.1.0.compute-2.wptquy on compute-2 is in error state
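
The HEALTH_WARN detail appears twice because the monitor both sends it to the cluster log channel and records it in its own log; the underlying state is three cephadm-managed NFS daemons in error, one per compute host. The same list can be pulled programmatically (a sketch; `ceph orch ps --format json` is a standard cephadm command, but the JSON field names used below are assumptions):

    import json
    import subprocess

    ps = json.loads(subprocess.check_output(
        ["ceph", "orch", "ps", "--format", "json"]))
    for d in ps:
        if d.get("status_desc") == "error":        # field name assumed
            name = d.get("daemon_name") or \
                   f'{d.get("daemon_type")}.{d.get("daemon_id")}'
            print(f'daemon {name} on {d.get("hostname")} is in error state')
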
Oct 12 17:40:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:00.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:01 np0005481680 podman[299824]: 2025-10-12 21:40:01.127834585 +0000 UTC m=+0.079930715 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:40:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
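
The paired mgr lines record Prometheus 2.51.0 scraping the mgr prometheus module's /metrics endpoint, about 48 KB per scrape. An equivalent manual fetch; the log shows only the path and response size, so the port below is an assumption (9283 is the module's default):

    import urllib.request

    body = urllib.request.urlopen(
        "http://192.168.122.100:9283/metrics", timeout=10).read()
    print(len(body), "bytes")   # the scrapes above returned 48454 bytes
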
Oct 12 17:40:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:02 np0005481680 nova_compute[264665]: 2025-10-12 21:40:02.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:02.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:40:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:40:03 np0005481680 nova_compute[264665]: 2025-10-12 21:40:03.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:03.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:04.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:06.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:07 np0005481680 nova_compute[264665]: 2025-10-12 21:40:07.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:07.317Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:40:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:07.318Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:40:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:07.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:08 np0005481680 nova_compute[264665]: 2025-10-12 21:40:08.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:08.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:08.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:09.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:10.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:11.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:12] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:40:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:12] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:40:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:12 np0005481680 nova_compute[264665]: 2025-10-12 21:40:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:12.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:13 np0005481680 nova_compute[264665]: 2025-10-12 21:40:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:16 np0005481680 podman[299884]: 2025-10-12 21:40:16.132565861 +0000 UTC m=+0.092478294 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 12 17:40:16 np0005481680 podman[299885]: 2025-10-12 21:40:16.172159178 +0000 UTC m=+0.127129095 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 12 17:40:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:17 np0005481680 nova_compute[264665]: 2025-10-12 21:40:17.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:17.319Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:40:18
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr']
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:40:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:40:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:40:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:18.379 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:40:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:18.379 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:40:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:18.379 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:18 np0005481680 nova_compute[264665]: 2025-10-12 21:40:18.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:18.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:18.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:40:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:18.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:40:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:40:19 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:40:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:19.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.058308317 +0000 UTC m=+0.079923105 container create 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:40:20 np0005481680 systemd[1]: Started libpod-conmon-253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564.scope.
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.023906141 +0000 UTC m=+0.045520979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.167346962 +0000 UTC m=+0.188961790 container init 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.18103788 +0000 UTC m=+0.202652648 container start 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 12 17:40:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:40:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:20 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.185564315 +0000 UTC m=+0.207179093 container attach 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:40:20 np0005481680 happy_perlman[300123]: 167 167
Oct 12 17:40:20 np0005481680 systemd[1]: libpod-253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564.scope: Deactivated successfully.
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.189013233 +0000 UTC m=+0.210628031 container died 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 12 17:40:20 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d765885a27e2cf3f6a7d3991a84934d694b536235afc4808cf0b2875a3209687-merged.mount: Deactivated successfully.
Oct 12 17:40:20 np0005481680 podman[300106]: 2025-10-12 21:40:20.251520184 +0000 UTC m=+0.273134952 container remove 253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_perlman, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:40:20 np0005481680 systemd[1]: libpod-conmon-253d413456678e95109f6275d6c978ba81ca2efe4360e97c24cb4be1a3b81564.scope: Deactivated successfully.
Oct 12 17:40:20 np0005481680 podman[300146]: 2025-10-12 21:40:20.519104283 +0000 UTC m=+0.072271171 container create 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 12 17:40:20 np0005481680 systemd[1]: Started libpod-conmon-283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723.scope.
Oct 12 17:40:20 np0005481680 podman[300146]: 2025-10-12 21:40:20.489335165 +0000 UTC m=+0.042502113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:20 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:20 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:20 np0005481680 podman[300146]: 2025-10-12 21:40:20.634975031 +0000 UTC m=+0.188141979 container init 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:40:20 np0005481680 podman[300146]: 2025-10-12 21:40:20.655771301 +0000 UTC m=+0.208938199 container start 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:40:20 np0005481680 podman[300146]: 2025-10-12 21:40:20.6604777 +0000 UTC m=+0.213644598 container attach 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:40:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:20.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:21 np0005481680 upbeat_heisenberg[300164]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:40:21 np0005481680 upbeat_heisenberg[300164]: --> All data devices are unavailable
Oct 12 17:40:21 np0005481680 systemd[1]: libpod-283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723.scope: Deactivated successfully.
Oct 12 17:40:21 np0005481680 podman[300146]: 2025-10-12 21:40:21.077036079 +0000 UTC m=+0.630202977 container died 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:40:21 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fe5e942f928982e926790e4e8ffb51efcecf0386c04372f1457feb4e2251c766-merged.mount: Deactivated successfully.
Oct 12 17:40:21 np0005481680 podman[300146]: 2025-10-12 21:40:21.141390897 +0000 UTC m=+0.694557795 container remove 283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:40:21 np0005481680 systemd[1]: libpod-conmon-283eb7069e900e16c5589b76aaae59d2a363985015158d949eaae907f92c7723.scope: Deactivated successfully.
Oct 12 17:40:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:21 np0005481680 podman[300287]: 2025-10-12 21:40:21.954316053 +0000 UTC m=+0.068880233 container create ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 12 17:40:22 np0005481680 systemd[1]: Started libpod-conmon-ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7.scope.
Oct 12 17:40:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:21.925498399 +0000 UTC m=+0.040062630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:22.069041123 +0000 UTC m=+0.183605323 container init ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:22.082955227 +0000 UTC m=+0.197519377 container start ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:22.087590974 +0000 UTC m=+0.202155154 container attach ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:40:22 np0005481680 wizardly_almeida[300303]: 167 167
Oct 12 17:40:22 np0005481680 systemd[1]: libpod-ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7.scope: Deactivated successfully.
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:22.0905731 +0000 UTC m=+0.205137280 container died ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:40:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ee76a43059a13db6c20cd638d26ae8bdd2d906185ed9d3727cce158fdaaea2d0-merged.mount: Deactivated successfully.
Oct 12 17:40:22 np0005481680 podman[300287]: 2025-10-12 21:40:22.14911227 +0000 UTC m=+0.263676440 container remove ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_almeida, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:40:22 np0005481680 systemd[1]: libpod-conmon-ab9f11b1deb514462d6212dbcbee8b2165ee1ac2b8542ee00e1cca116d36fbd7.scope: Deactivated successfully.
Oct 12 17:40:22 np0005481680 nova_compute[264665]: 2025-10-12 21:40:22.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.410521412 +0000 UTC m=+0.072275810 container create 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:40:22 np0005481680 systemd[1]: Started libpod-conmon-12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17.scope.
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.380826967 +0000 UTC m=+0.042581425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:22 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ee0bed56494b70527491d7b799051d35d847c1c9f3a7a780751994faa44feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ee0bed56494b70527491d7b799051d35d847c1c9f3a7a780751994faa44feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ee0bed56494b70527491d7b799051d35d847c1c9f3a7a780751994faa44feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:22 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ee0bed56494b70527491d7b799051d35d847c1c9f3a7a780751994faa44feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.536352174 +0000 UTC m=+0.198106602 container init 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.548590325 +0000 UTC m=+0.210344733 container start 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.553198313 +0000 UTC m=+0.214952721 container attach 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]: {
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:    "0": [
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:        {
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "devices": [
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "/dev/loop3"
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            ],
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "lv_name": "ceph_lv0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "lv_size": "21470642176",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "name": "ceph_lv0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "tags": {
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.cluster_name": "ceph",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.crush_device_class": "",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.encrypted": "0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.osd_id": "0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.type": "block",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.vdo": "0",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:                "ceph.with_tpm": "0"
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            },
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "type": "block",
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:            "vg_name": "ceph_vg0"
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:        }
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]:    ]
Oct 12 17:40:22 np0005481680 vigorous_wozniak[300345]: }
Oct 12 17:40:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:22.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:22 np0005481680 systemd[1]: libpod-12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17.scope: Deactivated successfully.
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.898508529 +0000 UTC m=+0.560262937 container died 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:40:22 np0005481680 systemd[1]: var-lib-containers-storage-overlay-87ee0bed56494b70527491d7b799051d35d847c1c9f3a7a780751994faa44feb-merged.mount: Deactivated successfully.
Oct 12 17:40:22 np0005481680 podman[300328]: 2025-10-12 21:40:22.965673138 +0000 UTC m=+0.627427516 container remove 12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 12 17:40:22 np0005481680 systemd[1]: libpod-conmon-12535d373df7077d647d43f0a20720a39cea4364bb75258eba3c3c93346d5b17.scope: Deactivated successfully.
Oct 12 17:40:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:23 np0005481680 nova_compute[264665]: 2025-10-12 21:40:23.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.826151625 +0000 UTC m=+0.078649323 container create 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:40:23 np0005481680 systemd[1]: Started libpod-conmon-089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b.scope.
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.792914009 +0000 UTC m=+0.045411777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:23 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.936466452 +0000 UTC m=+0.188964180 container init 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.949995836 +0000 UTC m=+0.202493524 container start 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.954413509 +0000 UTC m=+0.206911247 container attach 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:40:23 np0005481680 intelligent_ptolemy[300475]: 167 167
Oct 12 17:40:23 np0005481680 systemd[1]: libpod-089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b.scope: Deactivated successfully.
Oct 12 17:40:23 np0005481680 podman[300459]: 2025-10-12 21:40:23.960089723 +0000 UTC m=+0.212587421 container died 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 12 17:40:23 np0005481680 systemd[1]: var-lib-containers-storage-overlay-41c20c867b34d1df20b158d198b1bea0bffe603cacb0fd6d3a2d1126095a09db-merged.mount: Deactivated successfully.
Oct 12 17:40:24 np0005481680 podman[300459]: 2025-10-12 21:40:24.019455343 +0000 UTC m=+0.271953041 container remove 089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 12 17:40:24 np0005481680 systemd[1]: libpod-conmon-089c2d3a3365d79845d27d7d4995c906dc8dd051d850ad8338e71fca94e6066b.scope: Deactivated successfully.
Oct 12 17:40:24 np0005481680 podman[300499]: 2025-10-12 21:40:24.290987243 +0000 UTC m=+0.078175250 container create d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:40:24 np0005481680 podman[300499]: 2025-10-12 21:40:24.256584977 +0000 UTC m=+0.043773044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:40:24 np0005481680 systemd[1]: Started libpod-conmon-d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3.scope.
Oct 12 17:40:24 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:40:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77498b444468b2aa2b44dcf2311221cd358dc6809cdcaa14fbc6d9095a7a3b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77498b444468b2aa2b44dcf2311221cd358dc6809cdcaa14fbc6d9095a7a3b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77498b444468b2aa2b44dcf2311221cd358dc6809cdcaa14fbc6d9095a7a3b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:24 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77498b444468b2aa2b44dcf2311221cd358dc6809cdcaa14fbc6d9095a7a3b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:40:24 np0005481680 podman[300499]: 2025-10-12 21:40:24.408163585 +0000 UTC m=+0.195351602 container init d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 12 17:40:24 np0005481680 podman[300499]: 2025-10-12 21:40:24.418421396 +0000 UTC m=+0.205609413 container start d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:40:24 np0005481680 podman[300499]: 2025-10-12 21:40:24.422978621 +0000 UTC m=+0.210166648 container attach d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:40:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:25 np0005481680 lvm[300600]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:40:25 np0005481680 lvm[300600]: VG ceph_vg0 finished
Oct 12 17:40:25 np0005481680 silly_rhodes[300515]: {}
Oct 12 17:40:25 np0005481680 podman[300589]: 2025-10-12 21:40:25.288383323 +0000 UTC m=+0.106721027 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:40:25 np0005481680 systemd[1]: libpod-d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3.scope: Deactivated successfully.
Oct 12 17:40:25 np0005481680 podman[300499]: 2025-10-12 21:40:25.295929005 +0000 UTC m=+1.083116992 container died d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 12 17:40:25 np0005481680 systemd[1]: libpod-d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3.scope: Consumed 1.467s CPU time.
Oct 12 17:40:25 np0005481680 systemd[1]: var-lib-containers-storage-overlay-b77498b444468b2aa2b44dcf2311221cd358dc6809cdcaa14fbc6d9095a7a3b2-merged.mount: Deactivated successfully.
Oct 12 17:40:25 np0005481680 podman[300499]: 2025-10-12 21:40:25.353324866 +0000 UTC m=+1.140512883 container remove d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 12 17:40:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:25 np0005481680 systemd[1]: libpod-conmon-d46fe626e5fdecf07b4b2c8adb08d29ad699389a753522ea41435a1943dfc6f3.scope: Deactivated successfully.
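
The entries from 17:40:24 to here for container d46fe626... trace a complete one-shot lifecycle: attach, a `{}` result printed on stdout by the silly_rhodes ceph container, died, overlay unmount, remove, and finally the conmon scope closing after ~1.5 s of CPU. That pattern is consistent with one of cephadm's short-lived helper containers (the device-inventory write that follows at 17:40:25 supports that reading). A hypothetical helper for grouping such events by container ID from journal text already read into `lines`:

    import re

    # Hypothetical helper: collect podman lifecycle verbs per container ID.
    EVENT_RE = re.compile(
        r'podman\[\d+\]:.* container (?P<verb>attach|died|remove) '
        r'(?P<cid>[0-9a-f]{64})'
    )

    def lifecycle(lines):
        events = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events.setdefault(m.group('cid'), []).append(m.group('verb'))
        return events
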
Oct 12 17:40:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:40:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:40:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
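
The two config-key writes above are the cephadm mgr module persisting per-host state: the device scan under mgr/cephadm/host.compute-0.devices.0 and host metadata under mgr/cephadm/host.compute-0, which fits the helper container that just exited. A sketch for reading one of those keys back, assuming admin CLI access on the node; treating the stored value as JSON is an assumption (it holds for current cephadm releases):

    import json
    import subprocess

    # Key name copied from the mon_command above; CLI/keyring access assumed.
    key = 'mgr/cephadm/host.compute-0.devices.0'
    out = subprocess.run(
        ['ceph', 'config-key', 'get', key],
        capture_output=True, text=True, check=True,
    ).stdout
    devices = json.loads(out)
    print(type(devices))
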
Oct 12 17:40:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:26 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:40:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:27 np0005481680 nova_compute[264665]: 2025-10-12 21:40:27.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:27.319Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:28 np0005481680 nova_compute[264665]: 2025-10-12 21:40:28.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:28.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
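
The duplicated GET /metrics lines are one scrape seen twice: once via the container unit's stdout and once via the mgr's own cherrypy access logger. Prometheus 2.51.0 pulls ~48 KB of exposition text every 10 s throughout this window. The equivalent fetch by hand, assuming the mgr prometheus module is listening on its default port 9283 (the port is not stated in the log):

    import urllib.request

    # URL assumes the ceph-mgr prometheus module default port (9283).
    URL = 'http://192.168.122.100:9283/metrics'
    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read().decode()
    print(len(body), 'bytes;', body.splitlines()[0])
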
Oct 12 17:40:32 np0005481680 podman[300684]: 2025-10-12 21:40:32.124960258 +0000 UTC m=+0.085162288 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:40:32 np0005481680 nova_compute[264665]: 2025-10-12 21:40:32.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:40:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
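
The mgr polls the OSD blocklist on a schedule (the same dispatch repeats at 17:40:48 below); an empty result is normal on a healthy cluster. The same query issued from the CLI, assuming an admin keyring is available:

    import json
    import subprocess

    # Same mon command as the dispatch above, issued via the ceph CLI.
    out = subprocess.run(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
        capture_output=True, text=True, check=True,
    ).stdout
    blocklist = json.loads(out or '[]')
    print(len(blocklist), 'blocklisted entries')
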
Oct 12 17:40:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:33.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:33 np0005481680 nova_compute[264665]: 2025-10-12 21:40:33.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:34 np0005481680 nova_compute[264665]: 2025-10-12 21:40:34.852 2 DEBUG oslo_concurrency.processutils [None req-33c348a5-fb27-48e3-b79a-a598ef49c22c 8c50b3381c914694aa92299a497cd5e0 e256cf69486e4f8b98a8da7fd5db38a5 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:40:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:34 np0005481680 nova_compute[264665]: 2025-10-12 21:40:34.887 2 DEBUG oslo_concurrency.processutils [None req-33c348a5-fb27-48e3-b79a-a598ef49c22c 8c50b3381c914694aa92299a497cd5e0 e256cf69486e4f8b98a8da7fd5db38a5 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
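
This pair is one of nova-compute's host checks shelling out for load average: oslo.concurrency logs the command before and after execution, with the exit code and wall time (0.035 s here). What those two lines amount to in code, assuming oslo.concurrency is importable:

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises on a non-zero exit.
    out, err = processutils.execute('env', 'LANG=C', 'uptime')
    print(out.strip())
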
Oct 12 17:40:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:37.320Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:40:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:37.321Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:40:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:37.321Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
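
The Alertmanager block above is the most actionable failure in this window: both ceph-dashboard webhook receivers time out at TCP connect (dial tcp ... i/o timeout), so alert notifications to compute-1 and compute-2 on port 8443 are dropped once the retry budget is exhausted, and the same error repeats roughly every 10 s throughout this section. A minimal reachability probe for those endpoints, hosts and port taken from the log:

    import socket

    HOSTS = ('compute-1.ctlplane.example.com', 'compute-2.ctlplane.example.com')
    for host in HOSTS:
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, 'tcp/8443 reachable')
        except OSError as exc:
            print(host, 'tcp/8443 unreachable:', exc)
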
Oct 12 17:40:37 np0005481680 nova_compute[264665]: 2025-10-12 21:40:37.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:38 np0005481680 nova_compute[264665]: 2025-10-12 21:40:38.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:38.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:39.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:39 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:39.985 164459 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c6:02:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '96:8e:e5:fd:4e:19'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 12 17:40:39 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:39.986 164459 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 12 17:40:40 np0005481680 nova_compute[264665]: 2025-10-12 21:40:40.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:40.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:42] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:40:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:42] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:40:42 np0005481680 nova_compute[264665]: 2025-10-12 21:40:42.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:42.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:43.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:43 np0005481680 nova_compute[264665]: 2025-10-12 21:40:43.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:45 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:40:45.988 164459 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=4fd585ac-c8a3-45e9-b563-f151bc390e2e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
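
This transaction closes the loop opened at 17:40:39: northd bumped SB_Global.nb_cfg to 15, the metadata agent deliberately waited 6 s ("Delaying updating chassis table"), and now acknowledges by writing neutron:ovn-metadata-sb-cfg=15 into its Chassis_Private external_ids, which is how the control plane distinguishes a live agent from a dead one. The ack pattern, reduced to a runnable sketch (names here are illustrative, not the agent's actual internals):

    import time

    def ack_nb_cfg(chassis, nb_cfg, delay_s=6.0):
        # The logged "Delaying updating chassis table for 6 seconds" step,
        # then the external_ids write from the DbSetCommand above.
        time.sleep(delay_s)
        chassis['external_ids']['neutron:ovn-metadata-sb-cfg'] = str(nb_cfg)

    chassis = {'external_ids': {}}
    ack_nb_cfg(chassis, 15, delay_s=0)  # zero delay just for the demo
    print(chassis['external_ids'])
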
Oct 12 17:40:46 np0005481680 nova_compute[264665]: 2025-10-12 21:40:46.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:47 np0005481680 podman[300718]: 2025-10-12 21:40:47.13238447 +0000 UTC m=+0.092192387 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:40:47 np0005481680 podman[300719]: 2025-10-12 21:40:47.179453777 +0000 UTC m=+0.134618516 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 12 17:40:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:47.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:47 np0005481680 nova_compute[264665]: 2025-10-12 21:40:47.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:47.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:40:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:40:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:40:48 np0005481680 nova_compute[264665]: 2025-10-12 21:40:48.658 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:48 np0005481680 nova_compute[264665]: 2025-10-12 21:40:48.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:48 np0005481680 nova_compute[264665]: 2025-10-12 21:40:48.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:48 np0005481680 nova_compute[264665]: 2025-10-12 21:40:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:48.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:48.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:49 np0005481680 nova_compute[264665]: 2025-10-12 21:40:49.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:49 np0005481680 nova_compute[264665]: 2025-10-12 21:40:49.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 12 17:40:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:49.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.698 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.698 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.699 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.699 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:40:51 np0005481680 nova_compute[264665]: 2025-10-12 21:40:51.700 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:40:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:40:52] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:40:52 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:40:52 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938711353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.218 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
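
The resource-tracker audit that started at 17:40:51 shells out to ceph df, as the client.openstack dispatch on the mon confirms, to size the RBD-backed disk pool; each call takes ~0.5 s here. Reproducing the probe, with the client id and conf path copied from the logged command line:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
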
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.478 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.481 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4501MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.481 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.481 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.599 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.600 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:40:52 np0005481680 nova_compute[264665]: 2025-10-12 21:40:52.622 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:40:52 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:52 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:52 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:52.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:53 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:40:53 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081428060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.137 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.142 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.156 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
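
A quick check of the inventory just reported: placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio, so with these numbers the host advertises 32 VCPU, 7168 MEMORY_MB, and 52.2 DISK_GB to the scheduler:

    # Figures copied from the set_inventory_for_provider line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
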
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.157 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.157 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:40:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:53.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:53 np0005481680 nova_compute[264665]: 2025-10-12 21:40:53.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:54 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:54 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:54 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:54.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:40:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:40:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:40:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:55.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:40:56 np0005481680 podman[300843]: 2025-10-12 21:40:56.138705117 +0000 UTC m=+0.092493195 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 12 17:40:56 np0005481680 nova_compute[264665]: 2025-10-12 21:40:56.157 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:56 np0005481680 nova_compute[264665]: 2025-10-12 21:40:56.158 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:56 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:56 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:56 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:57.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:40:57 np0005481680 nova_compute[264665]: 2025-10-12 21:40:57.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:57 np0005481680 nova_compute[264665]: 2025-10-12 21:40:57.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:40:57 np0005481680 nova_compute[264665]: 2025-10-12 21:40:57.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:40:57 np0005481680 nova_compute[264665]: 2025-10-12 21:40:57.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:40:57 np0005481680 nova_compute[264665]: 2025-10-12 21:40:57.687 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:40:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:58.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:40:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:58.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:40:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:40:58.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:40:58 np0005481680 nova_compute[264665]: 2025-10-12 21:40:58.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:40:58 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:58 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:40:58 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:40:58.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:40:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:40:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:40:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:40:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:40:59.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:00 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:00 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:00 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:41:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:02] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:41:02 np0005481680 nova_compute[264665]: 2025-10-12 21:41:02.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:02 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:02 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:02 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:03 np0005481680 podman[300871]: 2025-10-12 21:41:03.132087226 +0000 UTC m=+0.090521425 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 12 17:41:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:41:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:41:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:03.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:03 np0005481680 nova_compute[264665]: 2025-10-12 21:41:03.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:04 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:04 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:04 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:06 np0005481680 nova_compute[264665]: 2025-10-12 21:41:06.683 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:41:06 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:06 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:06 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:06.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:07.324Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:07 np0005481680 nova_compute[264665]: 2025-10-12 21:41:07.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:08 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:08 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:08 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:08 np0005481680 nova_compute[264665]: 2025-10-12 21:41:08.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:10 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:10 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:10 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:12] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:41:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:12] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:41:12 np0005481680 nova_compute[264665]: 2025-10-12 21:41:12.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:12 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:12 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:12 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:13 np0005481680 nova_compute[264665]: 2025-10-12 21:41:13.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:14 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:14 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:14 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:41:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:41:16 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:16 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:16 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:16.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:17.325Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:17 np0005481680 nova_compute[264665]: 2025-10-12 21:41:17.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:17.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:18 np0005481680 podman[300932]: 2025-10-12 21:41:18.128468665 +0000 UTC m=+0.086020768 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Oct 12 17:41:18 np0005481680 podman[300933]: 2025-10-12 21:41:18.177414961 +0000 UTC m=+0.135418956 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:41:18
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', '.nfs', 'volumes', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:41:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:41:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:41:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:41:18.380 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:41:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:41:18.381 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:41:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:41:18.381 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:18 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:18 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:18 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:18.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:18 np0005481680 nova_compute[264665]: 2025-10-12 21:41:18.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:41:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:19.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:20 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:20 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:20 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:21.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:22] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:41:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:22] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:41:22 np0005481680 nova_compute[264665]: 2025-10-12 21:41:22.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:22 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:22 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:22 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:23.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:23 np0005481680 nova_compute[264665]: 2025-10-12 21:41:23.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:24 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:24 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:24 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:25.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct 12 17:41:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:26 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:26 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:26 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:27 np0005481680 podman[301067]: 2025-10-12 21:41:27.137128669 +0000 UTC m=+0.095733366 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 12 17:41:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:27.326Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:27 np0005481680 nova_compute[264665]: 2025-10-12 21:41:27.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:27 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:27.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:28.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:28 np0005481680 nova_compute[264665]: 2025-10-12 21:41:28.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:28 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:28 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:28 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:29 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:41:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.06598176 +0000 UTC m=+0.072199017 container create 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:41:30 np0005481680 systemd[1]: Started libpod-conmon-25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71.scope.
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.036207753 +0000 UTC m=+0.042425060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.181005986 +0000 UTC m=+0.187223303 container init 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.192803817 +0000 UTC m=+0.199021064 container start 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.196896431 +0000 UTC m=+0.203113698 container attach 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:41:30 np0005481680 admiring_moser[301223]: 167 167
Oct 12 17:41:30 np0005481680 systemd[1]: libpod-25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71.scope: Deactivated successfully.
Oct 12 17:41:30 np0005481680 conmon[301223]: conmon 25d62ba1a4f70a8f2ad2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71.scope/container/memory.events
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.203441277 +0000 UTC m=+0.209658534 container died 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:41:30 np0005481680 systemd[1]: var-lib-containers-storage-overlay-437a06b095083a59fd54e6b92d6e988afe65bc96691e2a6b14263ee703cab3cd-merged.mount: Deactivated successfully.
Oct 12 17:41:30 np0005481680 podman[301207]: 2025-10-12 21:41:30.268309468 +0000 UTC m=+0.274526715 container remove 25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 12 17:41:30 np0005481680 systemd[1]: libpod-conmon-25d62ba1a4f70a8f2ad244f567711ba7a1a0aa530a77716030c6a086b6a70e71.scope: Deactivated successfully.
Oct 12 17:41:30 np0005481680 podman[301248]: 2025-10-12 21:41:30.520211786 +0000 UTC m=+0.066200756 container create 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:41:30 np0005481680 systemd[1]: Started libpod-conmon-4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7.scope.
Oct 12 17:41:30 np0005481680 podman[301248]: 2025-10-12 21:41:30.494745738 +0000 UTC m=+0.040734718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:30 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:30 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
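The kernel lines above are the XFS Y2038 warning: the filesystem under these overlay mounts was formatted without the bigtime feature, so its inode timestamps stop at 2038-01-19 (0x7fffffff). A minimal Python sketch to check whether a given XFS mount has bigtime, assuming xfsprogs is recent enough for xfs_info to report the flag; the mount point is an assumption taken from the overlay paths in the log:

#!/usr/bin/env python3
"""Report whether an XFS filesystem uses big timestamps (no Y2038 cap)."""
import re
import subprocess

def xfs_bigtime_enabled(mount_point: str) -> bool:
    # xfs_info prints the superblock geometry; xfsprogs >= 5.10 includes
    # a "bigtime=0|1" field in that output (assumed here).
    out = subprocess.run(
        ["xfs_info", mount_point],
        check=True, capture_output=True, text=True,
    ).stdout
    match = re.search(r"bigtime=(\d)", out)
    return bool(match) and match.group(1) == "1"

if __name__ == "__main__":
    mp = "/var/lib/containers"  # assumed XFS mount, from the overlay paths above
    if xfs_bigtime_enabled(mp):
        print(f"{mp}: bigtime enabled, timestamps past 2038 are fine")
    else:
        print(f"{mp}: bigtime disabled, timestamps end 2038-01-19")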
Oct 12 17:41:30 np0005481680 podman[301248]: 2025-10-12 21:41:30.615521921 +0000 UTC m=+0.161510941 container init 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 12 17:41:30 np0005481680 podman[301248]: 2025-10-12 21:41:30.626921931 +0000 UTC m=+0.172910911 container start 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 12 17:41:30 np0005481680 podman[301248]: 2025-10-12 21:41:30.631657011 +0000 UTC m=+0.177645991 container attach 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:41:30 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:30 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:41:30 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:41:31 np0005481680 naughty_chatelet[301265]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:41:31 np0005481680 naughty_chatelet[301265]: --> All data devices are unavailable
Oct 12 17:41:31 np0005481680 systemd[1]: libpod-4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7.scope: Deactivated successfully.
Oct 12 17:41:31 np0005481680 podman[301248]: 2025-10-12 21:41:31.05966815 +0000 UTC m=+0.605657120 container died 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:41:31 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d2b203d410cfbf37dd425b6747b2bf6973504cc44f8a215012feecfa5257cc07-merged.mount: Deactivated successfully.
Oct 12 17:41:31 np0005481680 podman[301248]: 2025-10-12 21:41:31.120006055 +0000 UTC m=+0.665995045 container remove 4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:41:31 np0005481680 systemd[1]: libpod-conmon-4386b019b549dc9efa13f349275f056d122553f67b17d2253618ca48eb5a39d7.scope: Deactivated successfully.
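The burst above (create, init, start, attach, died, remove, all within one second) is a cephadm probe: the orchestrator runs throwaway ceph containers to inspect the host, and the same pattern repeats below for elegant_swartz, infallible_knuth, quizzical_lovelace, and optimistic_elion. A minimal sketch to watch these lifecycles live, assuming podman events --format json emits one JSON object per line with Type, Name, and Status fields (exact field names can vary between podman releases):

#!/usr/bin/env python3
"""Follow podman container lifecycle events, one line per event."""
import json
import subprocess

def follow_container_events() -> None:
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    assert proc.stdout is not None
    for line in proc.stdout:
        event = json.loads(line)
        if event.get("Type") != "container":
            continue
        # cephadm probes run create/init/start/attach/died/remove in
        # quick succession, exactly as the journal shows.
        print(f'{event.get("Name", "?"):24} {event.get("Status", "?")}')

if __name__ == "__main__":
    follow_container_events()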
Oct 12 17:41:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
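Each HEAD / probe (load-balancer health checks, by the look of the steady cadence from 192.168.122.100 and .102) lands as the three radosgw lines above: request start, request done, and a beast access-log record. A hedged regex for pulling the useful fields out of the beast line; the pattern is derived from this log, not from a published radosgw format specification:

#!/usr/bin/env python3
"""Parse a radosgw beast access-log line into its fields."""
import re

BEAST_RE = re.compile(
    r'beast: 0x[0-9a-f]+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

# Sample taken verbatim from the journal above.
line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
        '[12/Oct/2025:21:41:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

m = BEAST_RE.search(line)
if m:
    print(m.group("client"), repr(m.group("request")), m.group("status"),
          f'{float(m.group("latency")) * 1000:.3f} ms')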
Oct 12 17:41:31 np0005481680 podman[301390]: 2025-10-12 21:41:31.891683236 +0000 UTC m=+0.070173866 container create 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:41:31 np0005481680 systemd[1]: Started libpod-conmon-366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6.scope.
Oct 12 17:41:31 np0005481680 podman[301390]: 2025-10-12 21:41:31.862972086 +0000 UTC m=+0.041462786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:31 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:31 np0005481680 podman[301390]: 2025-10-12 21:41:31.992612354 +0000 UTC m=+0.171103034 container init 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:41:32 np0005481680 podman[301390]: 2025-10-12 21:41:32.000974797 +0000 UTC m=+0.179465397 container start 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:41:32 np0005481680 podman[301390]: 2025-10-12 21:41:32.004736443 +0000 UTC m=+0.183227103 container attach 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:41:32 np0005481680 elegant_swartz[301406]: 167 167
Oct 12 17:41:32 np0005481680 podman[301390]: 2025-10-12 21:41:32.007054712 +0000 UTC m=+0.185545362 container died 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 12 17:41:32 np0005481680 systemd[1]: libpod-366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6.scope: Deactivated successfully.
Oct 12 17:41:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:32] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:41:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:32] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Oct 12 17:41:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ea8d851e8eca8512a92eedc116f8b88d1ac645aea16dac8c76ac8054f8b8dfb5-merged.mount: Deactivated successfully.
Oct 12 17:41:32 np0005481680 podman[301390]: 2025-10-12 21:41:32.065718944 +0000 UTC m=+0.244209564 container remove 366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_swartz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct 12 17:41:32 np0005481680 systemd[1]: libpod-conmon-366de5d8c0497871a97ee77e56a36a2da777ea9d3e6849545c5b85ce56fa8da6.scope: Deactivated successfully.
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.289725372 +0000 UTC m=+0.066899212 container create 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:41:32 np0005481680 systemd[1]: Started libpod-conmon-6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe.scope.
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.261465834 +0000 UTC m=+0.038639724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:32 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40040dffd7396970dc0deba4cd74550751bf63e8ce2ce5763048024456c8b93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40040dffd7396970dc0deba4cd74550751bf63e8ce2ce5763048024456c8b93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40040dffd7396970dc0deba4cd74550751bf63e8ce2ce5763048024456c8b93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:32 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40040dffd7396970dc0deba4cd74550751bf63e8ce2ce5763048024456c8b93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:32 np0005481680 nova_compute[264665]: 2025-10-12 21:41:32.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.399089205 +0000 UTC m=+0.176263085 container init 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.414341043 +0000 UTC m=+0.191514893 container start 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.418405856 +0000 UTC m=+0.195579756 container attach 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]: {
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:    "0": [
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:        {
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "devices": [
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "/dev/loop3"
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            ],
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "lv_name": "ceph_lv0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "lv_size": "21470642176",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "name": "ceph_lv0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "tags": {
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.cluster_name": "ceph",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.crush_device_class": "",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.encrypted": "0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.osd_id": "0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.type": "block",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.vdo": "0",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:                "ceph.with_tpm": "0"
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            },
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "type": "block",
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:            "vg_name": "ceph_vg0"
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:        }
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]:    ]
Oct 12 17:41:32 np0005481680 infallible_knuth[301445]: }
Oct 12 17:41:32 np0005481680 systemd[1]: libpod-6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe.scope: Deactivated successfully.
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.757352189 +0000 UTC m=+0.534526029 container died 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 12 17:41:32 np0005481680 systemd[1]: var-lib-containers-storage-overlay-f40040dffd7396970dc0deba4cd74550751bf63e8ce2ce5763048024456c8b93-merged.mount: Deactivated successfully.
Oct 12 17:41:32 np0005481680 podman[301428]: 2025-10-12 21:41:32.806854239 +0000 UTC m=+0.584028089 container remove 6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 17:41:32 np0005481680 systemd[1]: libpod-conmon-6ca0bce4ae2bb1b2a6fcd642e1e3f633b1b97b234577d38dbe1d60be740a96fe.scope: Deactivated successfully.
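The JSON that infallible_knuth printed above has the shape of ceph-volume lvm list --format json output: an OSD id mapped to a list of LV records with devices, lv_path, lv_size, and a tags dict. A minimal sketch that reduces it to one line per OSD, using the document's JSON trimmed to the fields actually read:

#!/usr/bin/env python3
"""Summarize ceph-volume style lvm-list JSON into one line per OSD."""
import json

raw = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "21470642176",
      "tags": {
        "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
        "ceph.type": "block"
      }
    }
  ]
}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30  # lv_size is bytes, as a string
        print(f'osd.{osd_id}: {lv["lv_path"]} '
              f'({lv["tags"]["ceph.type"]}, {size_gib:.1f} GiB) '
              f'on {", ".join(lv["devices"])}')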
Oct 12 17:41:32 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:32 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:32 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:41:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:41:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.554349985 +0000 UTC m=+0.065516128 container create 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 12 17:41:33 np0005481680 systemd[1]: Started libpod-conmon-91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551.scope.
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.528177819 +0000 UTC m=+0.039343942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:33 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.655466078 +0000 UTC m=+0.166632261 container init 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.667644088 +0000 UTC m=+0.178810231 container start 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.672014889 +0000 UTC m=+0.183181032 container attach 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 12 17:41:33 np0005481680 quizzical_lovelace[301575]: 167 167
Oct 12 17:41:33 np0005481680 systemd[1]: libpod-91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551.scope: Deactivated successfully.
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.677754035 +0000 UTC m=+0.188920178 container died 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 12 17:41:33 np0005481680 systemd[1]: var-lib-containers-storage-overlay-e17c2c9616577ecc0c1a5c0b309bf1e660d87e9e8006ccc59ca3c2155bdb34bf-merged.mount: Deactivated successfully.
Oct 12 17:41:33 np0005481680 podman[301557]: 2025-10-12 21:41:33.739176447 +0000 UTC m=+0.250342560 container remove 91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 12 17:41:33 np0005481680 podman[301572]: 2025-10-12 21:41:33.749354616 +0000 UTC m=+0.126636082 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 12 17:41:33 np0005481680 systemd[1]: libpod-conmon-91ffb82591e1762adc325b34c74dbeaf236933bbeff15293feb113bcae2d0551.scope: Deactivated successfully.
Oct 12 17:41:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:33.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:33 np0005481680 nova_compute[264665]: 2025-10-12 21:41:33.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:34 np0005481680 podman[301615]: 2025-10-12 21:41:34.028541398 +0000 UTC m=+0.079737479 container create 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 12 17:41:34 np0005481680 systemd[1]: Started libpod-conmon-36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466.scope.
Oct 12 17:41:34 np0005481680 podman[301615]: 2025-10-12 21:41:33.994288327 +0000 UTC m=+0.045484478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:41:34 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:41:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3190bf82d70bba4ee0e967bd3b116d4966bf0d89cc4d0f414206c8eaf4f06c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3190bf82d70bba4ee0e967bd3b116d4966bf0d89cc4d0f414206c8eaf4f06c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3190bf82d70bba4ee0e967bd3b116d4966bf0d89cc4d0f414206c8eaf4f06c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:34 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3190bf82d70bba4ee0e967bd3b116d4966bf0d89cc4d0f414206c8eaf4f06c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:41:34 np0005481680 podman[301615]: 2025-10-12 21:41:34.140969629 +0000 UTC m=+0.192165760 container init 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:41:34 np0005481680 podman[301615]: 2025-10-12 21:41:34.15476473 +0000 UTC m=+0.205960811 container start 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:41:34 np0005481680 podman[301615]: 2025-10-12 21:41:34.159204953 +0000 UTC m=+0.210401034 container attach 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:41:34 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:34 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:34 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:34.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:34 np0005481680 lvm[301707]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:41:34 np0005481680 lvm[301707]: VG ceph_vg0 finished
Oct 12 17:41:34 np0005481680 optimistic_elion[301632]: {}
Oct 12 17:41:35 np0005481680 systemd[1]: libpod-36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466.scope: Deactivated successfully.
Oct 12 17:41:35 np0005481680 podman[301615]: 2025-10-12 21:41:35.037003114 +0000 UTC m=+1.088199195 container died 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 12 17:41:35 np0005481680 systemd[1]: libpod-36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466.scope: Consumed 1.580s CPU time.
Oct 12 17:41:35 np0005481680 systemd[1]: var-lib-containers-storage-overlay-d3190bf82d70bba4ee0e967bd3b116d4966bf0d89cc4d0f414206c8eaf4f06c3-merged.mount: Deactivated successfully.
Oct 12 17:41:35 np0005481680 podman[301615]: 2025-10-12 21:41:35.097979575 +0000 UTC m=+1.149175656 container remove 36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 12 17:41:35 np0005481680 systemd[1]: libpod-conmon-36396b34a01a681e139d8578bf15ed7b89ff1adb1ac33dd1941424d52c573466.scope: Deactivated successfully.
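The lvm[301707] pair above is event-driven autoactivation: once /dev/loop3 came online, every PV of ceph_vg0 was present and the VG was marked complete. A minimal sketch confirming the same state through the LVM2 JSON report; the VG name is copied from the log, and rows nest under report[0]["vg"] in that format:

#!/usr/bin/env python3
"""Summarize one volume group via the LVM2 JSON report."""
import json
import subprocess

def vg_summary(vg_name: str) -> dict:
    out = subprocess.run(
        ["vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,lv_count,vg_size", vg_name],
        check=True, capture_output=True, text=True,
    ).stdout
    # All values arrive as strings in the LVM JSON report.
    return json.loads(out)["report"][0]["vg"][0]

if __name__ == "__main__":
    info = vg_summary("ceph_vg0")
    print(f'{info["vg_name"]}: {info["pv_count"]} PV(s), '
          f'{info["lv_count"]} LV(s), {info["vg_size"]}')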
Oct 12 17:41:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:41:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:41:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:35.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:41:36 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:36 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:36 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:37.327Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:37 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:37 np0005481680 nova_compute[264665]: 2025-10-12 21:41:37.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:37.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:38.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:38 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:38 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:38 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:39 np0005481680 nova_compute[264665]: 2025-10-12 21:41:39.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:39 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:39.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:40 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:40 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:40 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:41 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:42] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:41:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:42] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:41:42 np0005481680 nova_compute[264665]: 2025-10-12 21:41:42.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:42 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:42 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:42 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:43 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:43.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:44 np0005481680 nova_compute[264665]: 2025-10-12 21:41:44.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:44 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:44 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:44 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:45 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:46 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:46 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:46 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:47.331Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:41:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:47.332Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:41:47 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:47 np0005481680 nova_compute[264665]: 2025-10-12 21:41:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:41:47 np0005481680 nova_compute[264665]: 2025-10-12 21:41:47.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:41:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:41:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
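The mon lines above show the mgr auditing an osd blocklist ls query, which it repeats every fifteen seconds or so throughout this log. The same query works from any client with a keyring; a minimal sketch, assuming the command's JSON output is a list of entries (address plus expiry), a shape that may differ across Ceph releases:

#!/usr/bin/env python3
"""Run the blocklist query the mgr keeps dispatching in the audit log."""
import json
import subprocess

def blocklisted_clients() -> list:
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    entries = blocklisted_clients()
    print(f"{len(entries)} blocklist entries")
    for entry in entries:
        print(entry)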
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:41:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:41:48 np0005481680 nova_compute[264665]: 2025-10-12 21:41:48.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:48.922Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:41:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:48.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:48 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:48 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:48 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:49 np0005481680 nova_compute[264665]: 2025-10-12 21:41:49.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:41:49 np0005481680 podman[301785]: 2025-10-12 21:41:49.154300902 +0000 UTC m=+0.103366119 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 12 17:41:49 np0005481680 podman[301786]: 2025-10-12 21:41:49.207608169 +0000 UTC m=+0.150833798 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
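[editor's note] The two podman lines above are periodic health_status events for the iscsid and ovn_controller containers: each mounts a healthcheck script at /openstack/healthcheck and podman records the result (health_status=healthy, failing streak 0). The same state can be read back on demand; a small wrapper sketch, assuming the podman CLI is on PATH and the container names from the log:

    # Sketch: read back the health state podman is logging above.
    import json
    import subprocess

    for name in ("iscsid", "ovn_controller"):
        raw = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(raw)[0]["State"].get("Health", {})
        print(name, health.get("Status"), "failing_streak:",
              health.get("FailingStreak"))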
Oct 12 17:41:49 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:49 np0005481680 nova_compute[264665]: 2025-10-12 21:41:49.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:49 np0005481680 nova_compute[264665]: 2025-10-12 21:41:49.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:49 np0005481680 nova_compute[264665]: 2025-10-12 21:41:49.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
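[editor's note] The nova_compute DEBUG lines above are oslo.service's periodic-task runner walking ComputeManager's registered tasks; _reclaim_queued_deletes short-circuits because CONF.reclaim_instance_interval <= 0, i.e. deferred delete is disabled on this host. The mechanism is a decorator registry plus a driver loop; a minimal standalone sketch of the same pattern (illustrative manager and task, not nova's actual code; assumes oslo.service and oslo.config are installed):

    # Minimal sketch of the oslo.service periodic-task pattern behind the
    # "Running periodic task ..." lines above.
    import time
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10)  # run every 10 seconds
        def _poll_something(self, context):
            print("periodic task fired")

    mgr = Manager()
    while True:
        mgr.run_periodic_tasks(context=None)
        time.sleep(1)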
Oct 12 17:41:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:50 np0005481680 nova_compute[264665]: 2025-10-12 21:41:50.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:50 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:50 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:50 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:51 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:51.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:52] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 12 17:41:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:41:52] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
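[editor's note] The pair of lines above is the ceph-mgr prometheus module serving a scrape (cherrypy access log; 48459 bytes returned to Prometheus 2.51.0 every 10 seconds). The exporter can be queried directly. The sketch below assumes the module listens on its usual default port 9283; the log records only the client address, not the listening port, so treat the URL as an assumption:

    # Sketch: fetch the same /metrics endpoint Prometheus is scraping above.
    # Assumption: mgr prometheus module on default port 9283.
    from urllib.request import urlopen

    text = urlopen("http://192.168.122.100:9283/metrics", timeout=5).read().decode()
    for line in text.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)  # 0.0 corresponds to HEALTH_OK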
Oct 12 17:41:52 np0005481680 nova_compute[264665]: 2025-10-12 21:41:52.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:41:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:53 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.689 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.690 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.691 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 12 17:41:53 np0005481680 nova_compute[264665]: 2025-10-12 21:41:53.691 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:41:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:53.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:41:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:41:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336238810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.156 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
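[editor's note] The sequence above shows nova's resource tracker auditing storage by shelling out to `ceph df --format=json` as client.openstack (the mon audit entries record the dispatch) and getting the result back in ~0.5 s. The same probe, standalone, as a sketch; it assumes the ceph CLI, /etc/ceph/ceph.conf and a client.openstack keyring are present, and that the JSON carries total_bytes/total_avail_bytes under "stats" as current ceph df output does:

    # Sketch: the storage probe nova runs above, standalone.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30,
          "avail GiB:", stats["total_avail_bytes"] / 2**30)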
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.315 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.317 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.318 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.318 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.387 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.388 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.429 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 12 17:41:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:41:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2949401800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.923 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.931 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.952 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
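[editor's note] The inventory line above fixes this provider's schedulable capacity: placement computes, per resource class, capacity = (total - reserved) * allocation_ratio. Worked out for the values logged (a plain calculation reproducing placement's formula, not an API call):

    # Placement capacity formula applied to the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable:", cap)
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2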
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.955 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 12 17:41:54 np0005481680 nova_compute[264665]: 2025-10-12 21:41:54.955 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 12 17:41:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.187309) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315187350, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1363, "num_deletes": 258, "total_data_size": 2459375, "memory_usage": 2495072, "flush_reason": "Manual Compaction"}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315204479, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2396619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36890, "largest_seqno": 38252, "table_properties": {"data_size": 2390282, "index_size": 3536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13722, "raw_average_key_size": 19, "raw_value_size": 2377320, "raw_average_value_size": 3450, "num_data_blocks": 153, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760305193, "oldest_key_time": 1760305193, "file_creation_time": 1760305315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17246 microseconds, and 10303 cpu microseconds.
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.204546) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2396619 bytes OK
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.204578) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.206612) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.206638) EVENT_LOG_v1 {"time_micros": 1760305315206630, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.206663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2453428, prev total WAL file size 2453428, number of live WAL files 2.
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.208663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2340KB)], [80(11MB)]
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315208710, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14862502, "oldest_snapshot_seqno": -1}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6819 keys, 14699787 bytes, temperature: kUnknown
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315284557, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14699787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14655004, "index_size": 26623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179336, "raw_average_key_size": 26, "raw_value_size": 14532861, "raw_average_value_size": 2131, "num_data_blocks": 1051, "num_entries": 6819, "num_filter_entries": 6819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760305315, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.284811) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14699787 bytes
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.286220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.8 rd, 193.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.9 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.3) write-amplify(6.1) OK, records in: 7353, records dropped: 534 output_compression: NoCompression
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.286241) EVENT_LOG_v1 {"time_micros": 1760305315286231, "job": 46, "event": "compaction_finished", "compaction_time_micros": 75909, "compaction_time_cpu_micros": 50615, "output_level": 6, "num_output_files": 1, "total_output_size": 14699787, "num_input_records": 7353, "num_output_records": 6819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315286770, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305315289585, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.208555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.289632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.289639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.289642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.289645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:41:55 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:41:55.289648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
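[editor's note] The rocksdb burst above is one manual-compaction cycle inside the mon store: JOB 45 flushes a ~2.4 MB memtable to L0 table #82, JOB 46 compacts that file with the existing L6 table #80 into a new 14.7 MB table #83 (dropping 534 of 7353 records), and the inputs plus the old WAL are deleted. The machine-readable parts are the EVENT_LOG_v1 lines, which embed plain JSON after the marker; a small extractor sketch, reading journal text on stdin:

    # Sketch: pull the JSON payloads out of rocksdb EVENT_LOG_v1 lines like
    # the ones above and summarize flush/compaction events.
    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_finished":
            print("flush job", ev["job"], "lsm_state", ev["lsm_state"])
        elif ev.get("event") == "compaction_finished":
            print("compaction job", ev["job"], ev["total_output_size"],
                  "bytes in", ev["compaction_time_micros"], "us")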
Oct 12 17:41:55 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:56 np0005481680 nova_compute[264665]: 2025-10-12 21:41:56.956 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:56 np0005481680 nova_compute[264665]: 2025-10-12 21:41:56.957 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:41:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:41:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:57.333Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:57 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:57 np0005481680 nova_compute[264665]: 2025-10-12 21:41:57.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:41:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:58 np0005481680 podman[301886]: 2025-10-12 21:41:58.09833883 +0000 UTC m=+0.066839751 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 12 17:41:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:41:58.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:41:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:41:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:41:59 np0005481680 nova_compute[264665]: 2025-10-12 21:41:59.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:41:59 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:41:59 np0005481680 nova_compute[264665]: 2025-10-12 21:41:59.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 12 17:41:59 np0005481680 nova_compute[264665]: 2025-10-12 21:41:59.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 12 17:41:59 np0005481680 nova_compute[264665]: 2025-10-12 21:41:59.664 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 12 17:41:59 np0005481680 nova_compute[264665]: 2025-10-12 21:41:59.682 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 12 17:41:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:41:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:41:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:41:59.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:01 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:01.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:02] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 12 17:42:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:02] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Oct 12 17:42:02 np0005481680 nova_compute[264665]: 2025-10-12 21:42:02.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:42:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:42:03 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:04 np0005481680 nova_compute[264665]: 2025-10-12 21:42:04.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:04 np0005481680 podman[301912]: 2025-10-12 21:42:04.101270345 +0000 UTC m=+0.068704649 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:42:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:05 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:07.334Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:07 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:07 np0005481680 nova_compute[264665]: 2025-10-12 21:42:07.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:08.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:42:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:08.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:09 np0005481680 nova_compute[264665]: 2025-10-12 21:42:09.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:09 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:11 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:11.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:12] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 12 17:42:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:12] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Oct 12 17:42:12 np0005481680 nova_compute[264665]: 2025-10-12 21:42:12.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:13 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:13.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:14 np0005481680 nova_compute[264665]: 2025-10-12 21:42:14.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:15 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:17.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:17.335Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:17 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:17 np0005481680 nova_compute[264665]: 2025-10-12 21:42:17.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 12 17:42:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:17.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:42:18
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.nfs', '.rgw.root', 'images', '.mgr', 'backups', 'vms']
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:42:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:42:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:42:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:42:18.381 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:42:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:42:18.382 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:42:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:42:18.382 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:18.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:19 np0005481680 nova_compute[264665]: 2025-10-12 21:42:19.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:42:19 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:20 np0005481680 podman[301972]: 2025-10-12 21:42:20.145900282 +0000 UTC m=+0.100434557 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 12 17:42:20 np0005481680 podman[301973]: 2025-10-12 21:42:20.188458434 +0000 UTC m=+0.141410318 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 12 17:42:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:21 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:21.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:22] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:22 np0005481680 nova_compute[264665]: 2025-10-12 21:42:22.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:23 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:24 np0005481680 nova_compute[264665]: 2025-10-12 21:42:24.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:25.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:25 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:27.335Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:42:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:27.336Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:42:27 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:27 np0005481680 nova_compute[264665]: 2025-10-12 21:42:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:28 np0005481680 podman[302050]: 2025-10-12 21:42:28.310789437 +0000 UTC m=+0.071938000 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 12 17:42:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:28.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:29 np0005481680 nova_compute[264665]: 2025-10-12 21:42:29.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:29 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:29.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:31 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:31.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:32] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:32 np0005481680 nova_compute[264665]: 2025-10-12 21:42:32.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:42:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:42:33 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:34 np0005481680 nova_compute[264665]: 2025-10-12 21:42:34.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:35 np0005481680 podman[302077]: 2025-10-12 21:42:35.110504152 +0000 UTC m=+0.071515460 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 12 17:42:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:35 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:36 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 12 17:42:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:37.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.198962123 +0000 UTC m=+0.051965504 container create 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 17:42:37 np0005481680 systemd[1]: Started libpod-conmon-48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28.scope.
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.176572644 +0000 UTC m=+0.029576055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.297358966 +0000 UTC m=+0.150362387 container init 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.309448233 +0000 UTC m=+0.162451614 container start 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.313087506 +0000 UTC m=+0.166090927 container attach 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:42:37 np0005481680 cool_zhukovsky[302290]: 167 167
Oct 12 17:42:37 np0005481680 systemd[1]: libpod-48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28.scope: Deactivated successfully.
Oct 12 17:42:37 np0005481680 conmon[302290]: conmon 48767009360c3493dadc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28.scope/container/memory.events
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.320643738 +0000 UTC m=+0.173647149 container died 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 12 17:42:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:37.336Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:42:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:37.337Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:37 np0005481680 systemd[1]: var-lib-containers-storage-overlay-17f5fe37a5e2f6b1aa23cbad390ad53dc212ec69a27376cb077003340da03e0a-merged.mount: Deactivated successfully.
Oct 12 17:42:37 np0005481680 podman[302274]: 2025-10-12 21:42:37.378965822 +0000 UTC m=+0.231969223 container remove 48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_zhukovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:42:37 np0005481680 systemd[1]: libpod-conmon-48767009360c3493dadcc3320baa30316177fbb5eca359ded74cf3dd8a232c28.scope: Deactivated successfully.
Oct 12 17:42:37 np0005481680 nova_compute[264665]: 2025-10-12 21:42:37.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:37 np0005481680 podman[302314]: 2025-10-12 21:42:37.629400853 +0000 UTC m=+0.059564596 container create cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:42:37 np0005481680 systemd[1]: Started libpod-conmon-cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62.scope.
Oct 12 17:42:37 np0005481680 podman[302314]: 2025-10-12 21:42:37.611629961 +0000 UTC m=+0.041793714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:37 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:37 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:37 np0005481680 podman[302314]: 2025-10-12 21:42:37.752146946 +0000 UTC m=+0.182310729 container init cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:42:37 np0005481680 podman[302314]: 2025-10-12 21:42:37.767107096 +0000 UTC m=+0.197270839 container start cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Oct 12 17:42:37 np0005481680 podman[302314]: 2025-10-12 21:42:37.771858228 +0000 UTC m=+0.202022061 container attach cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:37.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:38 np0005481680 affectionate_chandrasekhar[302330]: --> passed data devices: 0 physical, 1 LVM
Oct 12 17:42:38 np0005481680 affectionate_chandrasekhar[302330]: --> All data devices are unavailable
Oct 12 17:42:38 np0005481680 systemd[1]: libpod-cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62.scope: Deactivated successfully.
Oct 12 17:42:38 np0005481680 podman[302314]: 2025-10-12 21:42:38.239022532 +0000 UTC m=+0.669186315 container died cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:38 np0005481680 systemd[1]: var-lib-containers-storage-overlay-82d2f7bcaa8d4ee7856889fdc344f1e94e0584c1b992757ecb0cfe67cc7a4750-merged.mount: Deactivated successfully.
Oct 12 17:42:38 np0005481680 podman[302314]: 2025-10-12 21:42:38.307761641 +0000 UTC m=+0.737925394 container remove cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 12 17:42:38 np0005481680 systemd[1]: libpod-conmon-cd95efb66d5f7ba87bd29b5d8895e317bdec991b8cdaf8fe5c0604e95b7f3f62.scope: Deactivated successfully.
Oct 12 17:42:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:38 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:38.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:39.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.19607527 +0000 UTC m=+0.075354578 container create b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 12 17:42:39 np0005481680 nova_compute[264665]: 2025-10-12 21:42:39.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.164752373 +0000 UTC m=+0.044031761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:39 np0005481680 systemd[1]: Started libpod-conmon-b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0.scope.
Oct 12 17:42:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.319434978 +0000 UTC m=+0.198714356 container init b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.329686748 +0000 UTC m=+0.208966046 container start b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.33409498 +0000 UTC m=+0.213374368 container attach b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 12 17:42:39 np0005481680 quirky_herschel[302467]: 167 167
Oct 12 17:42:39 np0005481680 systemd[1]: libpod-b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0.scope: Deactivated successfully.
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.336506842 +0000 UTC m=+0.215786140 container died b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 12 17:42:39 np0005481680 systemd[1]: var-lib-containers-storage-overlay-9b30da6687f1eb06dc9439e9ce61cf8acf2b20e715190313e3587338b23b40c4-merged.mount: Deactivated successfully.
Oct 12 17:42:39 np0005481680 podman[302450]: 2025-10-12 21:42:39.395242086 +0000 UTC m=+0.274521414 container remove b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:39 np0005481680 systemd[1]: libpod-conmon-b7fb8ed495328ae718d7cbb748071109b1edfbd9b91d9ee52d4d3039ca92e1a0.scope: Deactivated successfully.
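Annotation: the create → init → start → attach → died → remove sequence above, wrapped in transient libpod-conmon-*.scope units, is the normal footprint of a short-lived `podman run --rm` invocation (here, cephadm probing the host through the Ceph container). A minimal sketch, assuming only a host with podman installed, that reproduces the same event stream; the image reference is taken from the FROM_IMAGE label in the log, and the one-minute event window is an illustrative choice:

    import subprocess

    # Run a throwaway container; podman emits the same
    # create/init/start/attach/died/remove events seen in the journal above.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "true"],
        check=True,
    )

    # Print the matching lifecycle events from the last minute.
    events = subprocess.run(
        ["podman", "events", "--since", "1m", "--stream=false",
         "--filter", "image=quay.io/centos/centos:stream9"],
        check=True, capture_output=True, text=True,
    )
    print(events.stdout)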
Oct 12 17:42:39 np0005481680 podman[302490]: 2025-10-12 21:42:39.652897841 +0000 UTC m=+0.069985051 container create c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:42:39 np0005481680 systemd[1]: Started libpod-conmon-c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca.scope.
Oct 12 17:42:39 np0005481680 podman[302490]: 2025-10-12 21:42:39.624021667 +0000 UTC m=+0.041108917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:39 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e02ab1b5fe6e33ee183820e5b29bdccbafcc3a01723f8281c7834f7e51390f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e02ab1b5fe6e33ee183820e5b29bdccbafcc3a01723f8281c7834f7e51390f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e02ab1b5fe6e33ee183820e5b29bdccbafcc3a01723f8281c7834f7e51390f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:39 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e02ab1b5fe6e33ee183820e5b29bdccbafcc3a01723f8281c7834f7e51390f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
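Annotation: the repeated xfs messages are informational, not errors. Without the bigtime feature, XFS stores inode timestamps as 32-bit seconds, so they cap at 0x7fffffff seconds after the Unix epoch. A one-line check of what that limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds since the epoch, the classic
    # 32-bit time_t ceiling quoted in the xfs messages above.
    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00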
Oct 12 17:42:39 np0005481680 podman[302490]: 2025-10-12 21:42:39.769053916 +0000 UTC m=+0.186141166 container init c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:42:39 np0005481680 podman[302490]: 2025-10-12 21:42:39.781879132 +0000 UTC m=+0.198966332 container start c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 12 17:42:39 np0005481680 podman[302490]: 2025-10-12 21:42:39.786362946 +0000 UTC m=+0.203450156 container attach c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:39 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:39 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:39 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
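Annotation: each radosgw request shows up as a three-line group: "starting new request", "req done", and a beast access line. The anonymous "HEAD / HTTP/1.0" probes arriving roughly every two seconds from 192.168.122.100 and 192.168.122.102 are consistent with load-balancer health checks. A hedged sketch that pulls the fields out of a beast line; the pattern below is inferred from the logged lines themselves, not from radosgw documentation:

    import re

    # Message portion only (journal prefix "Oct 12 ... radosgw[95273]: " stripped).
    line = ('beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous '
            '[12/Oct/2025:21:42:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    pat = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*'
        r'latency=(?P<latency>[\d.]+)s'
    )
    m = pat.match(line)
    print(m.groupdict())
    # {'ip': '192.168.122.100', 'user': 'anonymous', ..., 'status': '200', ...}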
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]: {
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:    "0": [
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:        {
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "devices": [
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "/dev/loop3"
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            ],
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "lv_name": "ceph_lv0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "lv_size": "21470642176",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5adb8c35-1b74-5730-a252-62321f654cd5,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=47abdfbc-9d1c-416d-8d2d-2f925f341a02,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "lv_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "name": "ceph_lv0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "tags": {
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.block_uuid": "5YdkUR-pfQU-wC1l-GKJF-g7Hd-vZgq-zVr4rN",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.cephx_lockbox_secret": "",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.cluster_fsid": "5adb8c35-1b74-5730-a252-62321f654cd5",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.cluster_name": "ceph",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.crush_device_class": "",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.encrypted": "0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.osd_id": "0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.type": "block",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.vdo": "0",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:                "ceph.with_tpm": "0"
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            },
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "type": "block",
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:            "vg_name": "ceph_vg0"
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:        }
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]:    ]
Oct 12 17:42:40 np0005481680 focused_grothendieck[302507]: }
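Annotation: the JSON that focused_grothendieck prints is ceph-volume-style LVM inventory; it matches the output shape of `ceph-volume lvm list --format json`, which cephadm runs in one-shot containers when refreshing device state. A sketch that parses an abridged copy and maps OSD ids to their backing devices; the literal is shortened from the log, but the field names are unchanged:

    import json

    raw = '''
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {"ceph.osd_id": "0",
                   "ceph.osd_fsid": "47abdfbc-9d1c-416d-8d2d-2f925f341a02"},
          "type": "block"
        }
      ]
    }
    '''

    inventory = json.loads(raw)
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3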
Oct 12 17:42:40 np0005481680 systemd[1]: libpod-c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca.scope: Deactivated successfully.
Oct 12 17:42:40 np0005481680 podman[302490]: 2025-10-12 21:42:40.16619714 +0000 UTC m=+0.583284340 container died c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 12 17:42:40 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:40 np0005481680 systemd[1]: var-lib-containers-storage-overlay-41e02ab1b5fe6e33ee183820e5b29bdccbafcc3a01723f8281c7834f7e51390f-merged.mount: Deactivated successfully.
Oct 12 17:42:40 np0005481680 podman[302490]: 2025-10-12 21:42:40.228983828 +0000 UTC m=+0.646071028 container remove c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:40 np0005481680 systemd[1]: libpod-conmon-c1b7f725a2ee89c255bf0a30e7744a4d651a2de4ae4129e4d840db8cbe94e3ca.scope: Deactivated successfully.
Oct 12 17:42:40 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.004785374 +0000 UTC m=+0.055960365 container create 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 12 17:42:41 np0005481680 systemd[1]: Started libpod-conmon-5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb.scope.
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:40.980038364 +0000 UTC m=+0.031213455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:41.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:41 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.121474832 +0000 UTC m=+0.172649923 container init 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.132243026 +0000 UTC m=+0.183418057 container start 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.136762241 +0000 UTC m=+0.187937272 container attach 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:42:41 np0005481680 fervent_poitras[302637]: 167 167
Oct 12 17:42:41 np0005481680 systemd[1]: libpod-5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb.scope: Deactivated successfully.
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.141366568 +0000 UTC m=+0.192541599 container died 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 12 17:42:41 np0005481680 systemd[1]: var-lib-containers-storage-overlay-fb98b7be7d6fe850c31c8e599bd6d4377ddff1272dfc91e8ded253ae644752dd-merged.mount: Deactivated successfully.
Oct 12 17:42:41 np0005481680 podman[302621]: 2025-10-12 21:42:41.188760994 +0000 UTC m=+0.239935985 container remove 5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_poitras, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 12 17:42:41 np0005481680 systemd[1]: libpod-conmon-5de87000bf7951a993eb2e2f1265cc66c153bde44402fab0f1bd37cb67a465bb.scope: Deactivated successfully.
Oct 12 17:42:41 np0005481680 podman[302660]: 2025-10-12 21:42:41.450215116 +0000 UTC m=+0.068298309 container create 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 12 17:42:41 np0005481680 systemd[1]: Started libpod-conmon-6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c.scope.
Oct 12 17:42:41 np0005481680 podman[302660]: 2025-10-12 21:42:41.421897355 +0000 UTC m=+0.039980598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 12 17:42:41 np0005481680 systemd[1]: Started libcrun container.
Oct 12 17:42:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad376f3215ccafd14647c7d1d9a10e4cc96b52dd5f6dad3126d0becd53886851/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad376f3215ccafd14647c7d1d9a10e4cc96b52dd5f6dad3126d0becd53886851/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad376f3215ccafd14647c7d1d9a10e4cc96b52dd5f6dad3126d0becd53886851/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:41 np0005481680 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad376f3215ccafd14647c7d1d9a10e4cc96b52dd5f6dad3126d0becd53886851/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 12 17:42:41 np0005481680 podman[302660]: 2025-10-12 21:42:41.567650973 +0000 UTC m=+0.185734156 container init 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 12 17:42:41 np0005481680 podman[302660]: 2025-10-12 21:42:41.587274133 +0000 UTC m=+0.205357286 container start 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 12 17:42:41 np0005481680 podman[302660]: 2025-10-12 21:42:41.592357481 +0000 UTC m=+0.210440704 container attach 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:42:41 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:41 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:41 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:42 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:42] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:42 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:42] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Oct 12 17:42:42 np0005481680 lvm[302753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:42:42 np0005481680 lvm[302753]: VG ceph_vg0 finished
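Annotation: the lvm[302753] messages come from LVM's event-driven autoactivation noticing that every PV of ceph_vg0 (here, a single loop device) is online. A sketch of confirming the same thing on demand, assuming standard LVM2 JSON reporting options:

    import json
    import subprocess

    # vg_missing_pv_count == 0 means the VG is complete, as the log reports.
    out = subprocess.run(
        ["vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,vg_missing_pv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out)["report"][0]["vg"])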
Oct 12 17:42:42 np0005481680 agitated_lumiere[302677]: {}
Oct 12 17:42:42 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:42 np0005481680 systemd[1]: libpod-6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c.scope: Deactivated successfully.
Oct 12 17:42:42 np0005481680 systemd[1]: libpod-6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c.scope: Consumed 1.418s CPU time.
Oct 12 17:42:42 np0005481680 podman[302660]: 2025-10-12 21:42:42.453112559 +0000 UTC m=+1.071195762 container died 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 12 17:42:42 np0005481680 nova_compute[264665]: 2025-10-12 21:42:42.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:42 np0005481680 systemd[1]: var-lib-containers-storage-overlay-ad376f3215ccafd14647c7d1d9a10e4cc96b52dd5f6dad3126d0becd53886851-merged.mount: Deactivated successfully.
Oct 12 17:42:42 np0005481680 podman[302660]: 2025-10-12 21:42:42.523013118 +0000 UTC m=+1.141096281 container remove 6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 12 17:42:42 np0005481680 systemd[1]: libpod-conmon-6bdb8207f8a8f67552956fe3d31016b22e279e2c4f6763fbeb9fb5347d734e1c.scope: Deactivated successfully.
Oct 12 17:42:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct 12 17:42:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:42 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct 12 17:42:42 np0005481680 ceph-mon[73608]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:43.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:43 np0005481680 ceph-mon[73608]: from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' 
Oct 12 17:42:43 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:43 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:43 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:44 np0005481680 nova_compute[264665]: 2025-10-12 21:42:44.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:44 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:45 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:45 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:45 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:45 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:46 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:47.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:47 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:47.338Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:47 np0005481680 nova_compute[264665]: 2025-10-12 21:42:47.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:47 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:47 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:47 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:42:48 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354648892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 12 17:42:48 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354648892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 12 17:42:48 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:48.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
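Annotation: both alertmanager dispatch errors report the same underlying failure: the ceph-dashboard webhook receivers on compute-1 and compute-2 are unreachable on port 8443 (dial timeouts, then "context deadline exceeded" once the retry budget is spent). A plain TCP probe, independent of Alertmanager, separates a network problem from a receiver problem; the hosts and port are taken from the log, the 5-second timeout is an illustrative choice:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(f"{host}:8443 reachable")
        except OSError as exc:
            print(f"{host}:8443 failed: {exc}")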
Oct 12 17:42:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:49.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:49 np0005481680 nova_compute[264665]: 2025-10-12 21:42:49.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:49 np0005481680 nova_compute[264665]: 2025-10-12 21:42:49.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:49 np0005481680 nova_compute[264665]: 2025-10-12 21:42:49.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:49 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:49 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:49 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:49.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:50 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:50 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:50 np0005481680 nova_compute[264665]: 2025-10-12 21:42:50.659 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:50 np0005481680 nova_compute[264665]: 2025-10-12 21:42:50.662 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:50 np0005481680 nova_compute[264665]: 2025-10-12 21:42:50.663 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 12 17:42:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:51.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:51 np0005481680 podman[302828]: 2025-10-12 21:42:51.160081047 +0000 UTC m=+0.105034793 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 12 17:42:51 np0005481680 podman[302829]: 2025-10-12 21:42:51.198102044 +0000 UTC m=+0.137456088 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
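Annotation: the health_status=healthy events are podman's periodic healthchecks for the EDPM containers; the config_data blob shows each check simply runs /openstack/healthcheck inside the container. A sketch of reading the same status back on demand; the key is `.State.Health` in newer podman and `.State.Healthcheck` in older releases, so both are tried:

    import json
    import subprocess

    # Container names taken from the health_status events above.
    for name in ("iscsid", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(name, health.get("Status"))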
Oct 12 17:42:51 np0005481680 nova_compute[264665]: 2025-10-12 21:42:51.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:51 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:51 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:51 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:52 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:52] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:42:52 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:42:52] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:42:52 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:52 np0005481680 nova_compute[264665]: 2025-10-12 21:42:52.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:53.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.663 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.696 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.696 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.696 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.697 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 12 17:42:53 np0005481680 nova_compute[264665]: 2025-10-12 21:42:53.697 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:42:53 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:53 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:53 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:54 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:42:54 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898697720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.183 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
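Annotation: the resource-tracker audit shells out to the Ceph CLI (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`), which is also what produces the client.openstack "df" dispatches in the mon audit log above. A standalone sketch of the same call; the exact keys under "stats" vary between Ceph releases, so the sketch only prints what is actually present:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)

    # Cluster-wide totals and per-pool entries, as nova consumes them.
    print(sorted(df.get("stats", {}).keys()))
    for pool in df.get("pools", []):
        print(pool.get("name"))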
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:54 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.519 2 WARNING nova.virt.libvirt.driver [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.521 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4510MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.522 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.522 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.580 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.581 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 12 17:42:54 np0005481680 nova_compute[264665]: 2025-10-12 21:42:54.595 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 12 17:42:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 12 17:42:55 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27292832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 12 17:42:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:55.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:55 np0005481680 nova_compute[264665]: 2025-10-12 21:42:55.102 2 DEBUG oslo_concurrency.processutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 12 17:42:55 np0005481680 nova_compute[264665]: 2025-10-12 21:42:55.113 2 DEBUG nova.compute.provider_tree [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed in ProviderTree for provider: d63acd5d-c9c0-44fc-813b-0eadb368ddab update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 12 17:42:55 np0005481680 nova_compute[264665]: 2025-10-12 21:42:55.131 2 DEBUG nova.scheduler.client.report [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Inventory has not changed for provider d63acd5d-c9c0-44fc-813b-0eadb368ddab based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
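Annotation: the inventory record above makes the placement arithmetic explicit. Schedulable capacity per resource class is (total - reserved) × allocation_ratio, so this host exposes 32 VCPUs, 7168 MB of RAM and 52.2 GB of disk to the scheduler. As a worked check, using the numbers straight from the log:

    # Placement capacity formula: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2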
Oct 12 17:42:55 np0005481680 nova_compute[264665]: 2025-10-12 21:42:55.136 2 DEBUG nova.compute.resource_tracker [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 12 17:42:55 np0005481680 nova_compute[264665]: 2025-10-12 21:42:55.137 2 DEBUG oslo_concurrency.lockutils [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:42:55 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:42:55 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:55 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:55 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:55.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:56 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:57.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:42:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:57.339Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Oct 12 17:42:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:57.339Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:42:57 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:57.341Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:57 np0005481680 nova_compute[264665]: 2025-10-12 21:42:57.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:57 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:57 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:42:57 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:57.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:42:58 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:42:58 np0005481680 podman[302925]: 2025-10-12 21:42:58.584347622 +0000 UTC m=+0.098522167 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 12 17:42:58 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:42:58.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:42:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:42:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:42:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.138 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.139 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.664 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.665 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 12 17:42:59 np0005481680 nova_compute[264665]: 2025-10-12 21:42:59.680 2 DEBUG nova.compute.manager [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 12 17:42:59 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:42:59 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:42:59 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:42:59.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:00 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:00 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:01.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:01 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:01 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:01 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:01.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:02 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:02] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:43:02 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:02] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Oct 12 17:43:02 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:02 np0005481680 nova_compute[264665]: 2025-10-12 21:43:02.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:03.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:03 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:43:03 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:43:03 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:03 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:03 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:03.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:04 np0005481680 nova_compute[264665]: 2025-10-12 21:43:04.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:04 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:05.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:05 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:05 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:05 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:05 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:06 np0005481680 podman[302953]: 2025-10-12 21:43:06.13256646 +0000 UTC m=+0.088178355 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 12 17:43:06 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:07 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:07.341Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:07 np0005481680 nova_compute[264665]: 2025-10-12 21:43:07.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:07 np0005481680 nova_compute[264665]: 2025-10-12 21:43:07.674 2 DEBUG oslo_service.periodic_task [None req-793971d4-de22-4f3c-b09c-4f2e36d24032 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 12 17:43:07 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:07 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:07 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:08 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:08 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:08.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:09 np0005481680 nova_compute[264665]: 2025-10-12 21:43:09.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:09 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:09 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:09 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:09.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:10 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:10 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:11 np0005481680 systemd-logind[783]: New session 61 of user zuul.
Oct 12 17:43:11 np0005481680 systemd[1]: Started Session 61 of User zuul.
Oct 12 17:43:11 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:11 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:11 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:11.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:12 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:43:12 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:12] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Oct 12 17:43:12 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:12 np0005481680 nova_compute[264665]: 2025-10-12 21:43:12.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:13 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:13 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:43:13 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:13.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27413 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26965 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:14 np0005481680 nova_compute[264665]: 2025-10-12 21:43:14.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26968 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27425 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:14 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26977 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:15.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:15 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17943 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:15 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 12 17:43:15 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030724217' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 12 17:43:15 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:15 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:15 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:15.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:16 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:17 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:17.342Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:17 np0005481680 nova_compute[264665]: 2025-10-12 21:43:17.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:17 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:17 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:17 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Optimize plan auto_2025-10-12_21:43:18
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] do_upmap
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.nfs']
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [balancer INFO root] prepared 0/10 upmap changes
Oct 12 17:43:18 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:43:18 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:43:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:43:18.383 164459 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 12 17:43:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:43:18.383 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 12 17:43:18 np0005481680 ovn_metadata_agent[164454]: 2025-10-12 21:43:18.384 164459 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] scanning for idle connections..
Oct 12 17:43:18 np0005481680 ceph-mgr[73901]: [volumes INFO mgr_util] cleaning up connections: []
Oct 12 17:43:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:18.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Oct 12 17:43:18 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:18.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] _maybe_adjust
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 12 17:43:19 np0005481680 ceph-mgr[73901]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 12 17:43:19 np0005481680 nova_compute[264665]: 2025-10-12 21:43:19.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:19 np0005481680 ovs-vsctl[303353]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 12 17:43:19 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:19 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:19 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:20 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:20 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 12 17:43:20 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 12 17:43:20 np0005481680 virtqemud[264537]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 12 17:43:20 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27449 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:20 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.26998 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:20 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:43:20 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:43:21 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:43:21 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:43:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:21.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:21 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27464 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:21 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27013 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:21 np0005481680 podman[303584]: 2025-10-12 21:43:21.304597588 +0000 UTC m=+0.090701238 container health_status 73c86a63cc4b052b167a2f1d9605673047baff09f1b295e34189c98c4dafbdd8 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 12 17:43:21 np0005481680 podman[303592]: 2025-10-12 21:43:21.364254126 +0000 UTC m=+0.118353302 container health_status f2a7464cea61647fc2847e6335bd25d0a42669ff86226dc2fa9558e5e98b0e11 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 12 17:43:21 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: cache status {prefix=cache status} (starting...)
Oct 12 17:43:21 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:21 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: client ls {prefix=client ls} (starting...)
Oct 12 17:43:21 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:21 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27025 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:21 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27479 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:21 np0005481680 lvm[303746]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 12 17:43:21 np0005481680 lvm[303746]: VG ceph_vg0 finished
Oct 12 17:43:21 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:21 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:21 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:21.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:22 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:22] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:22] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27500 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: damage ls {prefix=damage ls} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump loads {prefix=dump loads} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17985 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:22 np0005481680 nova_compute[264665]: 2025-10-12 21:43:22.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894530133' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27533 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27070 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.791426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402791473, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1032, "num_deletes": 251, "total_data_size": 1706458, "memory_usage": 1731120, "flush_reason": "Manual Compaction"}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402802409, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1675621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38253, "largest_seqno": 39284, "table_properties": {"data_size": 1670534, "index_size": 2547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11407, "raw_average_key_size": 20, "raw_value_size": 1660216, "raw_average_value_size": 2928, "num_data_blocks": 111, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760305315, "oldest_key_time": 1760305315, "file_creation_time": 1760305402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 11002 microseconds, and 4163 cpu microseconds.
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.802442) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1675621 bytes OK
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.802458) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.803962) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.803974) EVENT_LOG_v1 {"time_micros": 1760305402803971, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.803989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1701659, prev total WAL file size 1701659, number of live WAL files 2.
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.804536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1636KB)], [83(14MB)]
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402804609, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 16375408, "oldest_snapshot_seqno": -1}
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.17991 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6870 keys, 14284811 bytes, temperature: kUnknown
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402896210, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 14284811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14240048, "index_size": 26475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 181231, "raw_average_key_size": 26, "raw_value_size": 14117321, "raw_average_value_size": 2054, "num_data_blocks": 1041, "num_entries": 6870, "num_filter_entries": 6870, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760302457, "oldest_key_time": 0, "file_creation_time": 1760305402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "695446f9-d869-48df-88e4-d00a44aa150b", "db_session_id": "PGH78N9J3MGSV7JI8MXK", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.896505) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14284811 bytes
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.898231) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 14.0 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(18.3) write-amplify(8.5) OK, records in: 7386, records dropped: 516 output_compression: NoCompression
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.898260) EVENT_LOG_v1 {"time_micros": 1760305402898247, "job": 48, "event": "compaction_finished", "compaction_time_micros": 91678, "compaction_time_cpu_micros": 45983, "output_level": 6, "num_output_files": 1, "total_output_size": 14284811, "num_input_records": 7386, "num_output_records": 6870, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402898857, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760305402903926, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.804453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.904079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.904091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.904095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.904098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mon[73608]: rocksdb: (Original Log Time 2025/10/12-21:43:22.904101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 12 17:43:22 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:22 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27548 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27091 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:23.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:23 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18003 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973369897' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: ops {prefix=ops} (starting...)
Oct 12 17:43:23 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 12 17:43:23 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280216852' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 12 17:43:23 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:23 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000024s ======
Oct 12 17:43:23 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795636045' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18039 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27599 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:43:24.204+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:24 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: session ls {prefix=session ls} (starting...)
Oct 12 17:43:24 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf Can't run that command on an inactive MDS!
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27151 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:24 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:43:24.398+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:24 np0005481680 ceph-mds[96289]: mds.cephfs.compute-0.nlzxsf asok_command: status {prefix=status} (starting...)
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415508646' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 12 17:43:24 np0005481680 nova_compute[264665]: 2025-10-12 21:43:24.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:24 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18054 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 12 17:43:24 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607316778' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277199487' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 12 17:43:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:25.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:25 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27644 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708519489' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362475621' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27196 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27659 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 12 17:43:25 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126179163' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18108 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:25 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: 2025-10-12T21:43:25.921+0000 7f37ed1f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:25 np0005481680 ceph-mgr[73901]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 12 17:43:25 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:25 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:25 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:25.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27208 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27671 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938592490' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27223 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27686 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776365142' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 12 17:43:26 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388295872' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27698 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:26 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27241 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18147 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:27.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 12 17:43:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1814951363' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27716 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:27.343Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18165 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 nova_compute[264665]: 2025-10-12 21:43:27.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27731 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 12 17:43:27 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052543145' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27280 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18180 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:27 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:27 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:27 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27749 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 991232 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b4c00 session 0x55d2570efa40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897960 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.180503845s of 42.293552399s, submitted: 3
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897828 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 974848 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 958464 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 950272 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 925696 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 942080 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 933888 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d258073860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899504 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.311359406s of 60.363491058s, submitted: 13
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899604 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 917504 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 901120 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901132 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d257da2960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901132 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.569141388s of 15.600605011s, submitted: 9
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 900832 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902644 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 884736 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904156 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 851968 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.944058418s of 10.986461639s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d7400 session 0x55d255f090e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903549 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 827392 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d2578fdc20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 811008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 802816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 770048 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 761856 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903417 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.344142914s of 29.355075836s, submitted: 3
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 745472 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 1794048 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 1794048 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903565 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 1794048 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1777664 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1777664 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 1777664 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902806 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.005134583s of 12.044654846s, submitted: 10
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 1761280 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 1736704 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 1712128 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2583081e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1703936 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 1695744 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 71.363334656s of 71.371582031s, submitted: 2
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902367 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 1679360 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1662976 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902383 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1646592 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902383 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1622016 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.147457123s of 13.186235428s, submitted: 9
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902083 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1605632 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 1589248 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1572864 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1564672 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 1540096 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 1531904 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7730 writes, 30K keys, 7730 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7730 writes, 1603 syncs, 4.82 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 695 writes, 1209 keys, 695 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
Interval WAL: 695 writes, 339 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d253cef350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
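[Annotation] RocksDB emits this periodic dump as one multi-line message; when it is forwarded through rsyslog, embedded newlines are escaped as #012 (octal for LF) and long messages can be truncated mid-line, as happens at the end of the [m-1] stanza above. A small hedged helper to unfold such lines when reading the raw log:

    import re

    # rsyslog's control-character escaping writes "#NNN" (octal): #012 is LF,
    # #011 is TAB. This restores the original line breaks. Caveat: a literal
    # "#123" in message text would be rewritten too, so apply with care.
    def unfold(line: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    print(unfold("** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval"))

Note also the block-cache "occupancy: 18446744073709551615" in the dump: that is 2**64 - 1, i.e. an unsigned 64-bit counter that has wrapped below zero, a reporting artifact rather than a real occupancy.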
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 1515520 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2570d6000 session 0x55d25807fe00
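[Annotation] The occasional "ms_handle_reset" entries are the OSD messenger's connection-reset callback: a remote session (most plausibly a monitor or client connection on this quiet cluster) was closed, and the Connection/Session objects at the logged heap addresses were torn down. The addresses are only useful for correlating other lines within this same process lifetime; these resets appear routine and are not paired with any error message.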
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d2563fd2c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 1507328 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 1499136 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 89.622047424s of 89.626823425s, submitted: 1
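[Annotation] "_kv_sync_thread utilization" summarizes BlueStore's kv-sync thread over its reporting window: here it was idle for 89.622 s of an 89.627 s window while flushing a single submitted batch, consistent with the near-idle OSD these heartbeats describe. The same arithmetic, as a throwaway check:

    # Idle fraction of the kv-sync thread, straight from the logged numbers.
    idle, window, submitted = 89.622047424, 89.626823425, 1
    print(f"idle {idle / window:.4%} of window, {submitted} batch submitted")
    # -> idle 99.9947% of window, 1 batch submitted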
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 1466368 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902515 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 1458176 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1433600 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1409024 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1392640 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902515 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.022595406s of 11.070187569s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902367 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d2563f74a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902235 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.939117432s of 28.950923920s, submitted: 3
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650b400 session 0x55d2588e21e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903895 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.930031776s of 14.969452858s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903747 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 903879 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.807968140s of 11.821829796s, submitted: 3
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905407 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650ac00 session 0x55d257fc23c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904648 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.167273521s of 10.208182335s, submitted: 10
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988450050s of 10.001911163s, submitted: 3
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d258498f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 180224 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 172032 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.289288521s of 10.427827835s, submitted: 17
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca85000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 106496 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 1064960 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904780 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 1015808 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
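Note that the heartbeat payload changes here relative to the earlier ones: the first store_statfs field drops from 0x4fca85000 to 0x4fc675000 while meta grows from 0x2fdf9c6 to 0x33ef9c6. Both deltas are exactly 0x410000 bytes (~4.06 MiB), so the space that left the first (available) field was consumed by metadata, presumably a RocksDB flush or compaction between the two snapshots (an inference, not something the log states):

    # Diff the two heartbeat snapshots; both deltas are 0x410000 bytes.
    d_avail = 0x4fca85000 - 0x4fc675000
    d_meta  = 0x33ef9c6 - 0x2fdf9c6
    assert d_avail == d_meta == 0x410000
    print(f"{d_avail} bytes = {d_avail / 2**20:.2f} MiB")  # 4259840 bytes = 4.06 MiB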
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 1007616 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 983040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904816 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 983040 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 974848 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 966656 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.564165115s of 12.723128319s, submitted: 211
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 958464 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 942080 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 933888 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 925696 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
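Amid the OSD chatter, the manager's audit channel records a client.admin session polling cephadm upgrade progress ("orch upgrade status" targeted at mon-mgr). Audit lines embed the command as JSON, which makes them easy to mine; a sketch using this exact line (the regex assumes the cmd=[...] blob ends at "]:"):

    import json, re
    line = ("log_channel(audit) log [DBG] : from='client.27292 -' entity='client.admin' "
            'cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch')
    cmd = json.loads(re.search(r"cmd=(\[.*\]):", line).group(1))
    print(cmd[0]["prefix"])  # orch upgrade status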
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread fragmentation_score=0.000026 took=0.000067s
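fragmentation_score is the BlueStore allocator's free-space fragmentation estimate, on what the allocator docs describe as a 0-to-1 scale (0 fully contiguous, 1 maximally fragmented); 0.000026 is effectively zero, as expected on a nearly empty OSD, and computing it took 67 microseconds. A trivial guard sketch, with the 0.8 threshold being an arbitrary illustrative choice:

    # Flag high allocator fragmentation; the threshold is illustrative only.
    score = 0.000026
    print("fragmentation high" if score > 0.8 else f"fragmentation fine ({score})")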
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2588e2d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 917504 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d25647e780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 909312 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904668 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 892928 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 884736 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.594898224s of 63.763050079s, submitted: 2
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904800 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 851968 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906460 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 811008 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 794624 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906460 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.920207977s of 13.966058731s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906328 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 786432 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 778240 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 12 17:43:28 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1418055807' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
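Here the monitor side of the same kind of traffic appears: mon.compute-0, rank 0 and current leader (e3 looks like the monmap epoch), receives "mgr module ls" from client.admin at 192.168.122.100 and logs the dispatch to its audit channel. Summarizing who runs what across a saved log is a natural extension of the parsing sketch above (file name again hypothetical):

    import collections, re
    # Group audit dispatches by (entity, command prefix).
    per_entity = collections.Counter()
    with open("np0005481680-journal.log") as f:  # hypothetical export of this log
        for line in f:
            m = re.search(r"entity='([^']+)'.*\"prefix\": \"([^\"]+)\"", line)
            if m:
                per_entity[m.groups()] += 1
    for (entity, prefix), n in per_entity.most_common():
        print(f"{entity:20s} {prefix:24s} x{n}")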
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 745472 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2558b3c00 session 0x55d257fc2960
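
ms_handle_reset is the messenger callback fired when a peer drops its connection; the con and session values are heap addresses of the Connection and Session objects, meaningful only for correlating lines within this one process. A throwaway sketch for counting resets per connection across a saved log (the file name osd0.log is hypothetical):

    import re
    from collections import Counter

    resets = Counter()
    with open("osd0.log") as log:          # hypothetical capture of this journal
        for entry in log:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", entry)
            if m:
                resets[m.group(1)] += 1

    for con, count in resets.most_common():
        print(con, count)
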
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905605 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 83.397789001s of 83.438568115s, submitted: 11
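
The _kv_sync_thread utilization lines quantify how busy BlueStore's RocksDB commit thread was over its last reporting window: idle 83.398 s of 83.439 s (~99.95%) while flushing 11 submitted transaction batches, i.e. the store is essentially quiescent. The arithmetic, spelled out:

    idle, window, submitted = 83.397789001, 83.438568115, 11
    print(f"kv_sync idle {idle / window:.2%}, "
          f"{submitted / window:.2f} submits/s")
    # kv_sync idle 99.95%, 0.13 submits/s
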
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 737280 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905737 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.383620262s of 15.857924461s, submitted: 9
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d2555b5800 session 0x55d2588f05a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905437 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905589 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 712704 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514731407s of 13.519078255s, submitted: 1
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905721 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 753664 heap: 83845120 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907249 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a400 session 0x55d2589712c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 761856 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.442175865s of 14.482097626s, submitted: 10
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908461 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 753664 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 745472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 745472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 745472 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 737280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 737280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 737280 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 720896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 720896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908761 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 720896 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486126900s of 11.537532806s, submitted: 10
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 712704 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907563 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18195 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907431 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0xef223/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 ms_handle_reset con 0x55d25650a000 session 0x55d257fc3a40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.445034027s of 58.675991058s, submitted: 4
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 704512 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916064 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 663552 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 138 ms_handle_reset con 0x55d2570d6000 session 0x55d2564ca000
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc668000/0x0/0x4ffc00000, data 0xf559d/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 10952704 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 139 ms_handle_reset con 0x55d2570d6000 session 0x55d256453680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0x5676d8/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 139 ms_handle_reset con 0x55d25650b400 session 0x55d2564cbc20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957955 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
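
The handle_osd_map lines show the OSD catching up on cluster maps one increment at a time: six messages walk it from epoch 135 through 136, 137, 138, and 139 up to 140, each reporting the epochs just received, the epoch the OSD currently has, and the full range the sender holds. The surrounding heartbeats confirm real writes landed during the catch-up (stored data grows from 0xef223 to 0x5696aa bytes, roughly 0.9 MiB to 5.4 MiB) while the heap briefly carries ~10.9 MiB of unmapped pages for the tuner to release. A sketch extracting that progression from lines of this shape:

    import re

    samples = [
        "osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]",
        "osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]",
        "osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]",
    ]

    pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")
    for entry in samples:
        first, last, have = map(int, pat.search(entry).groups())
        print(f"have {have}, received [{first},{last}], behind by {last - have}")
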
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f0000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 10936320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 10928128 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960845 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 10928128 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.486613274s of 11.676462173s, submitted: 57
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 10919936 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960021 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 10911744 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960153 data_alloc: 218103808 data_used: 98304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 10903552 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.504811287s of 10.536725044s, submitted: 8
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 10887168 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 11018240 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961517 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 11018240 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961517 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.018886566s of 10.053460121s, submitted: 8
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961385 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961385 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 ms_handle_reset con 0x55d25650a000 session 0x55d2563650e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 ms_handle_reset con 0x55d25650a400 session 0x55d257fc2960
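[annotation] ms_handle_reset fires when a peer drops its messenger connection and the OSD tears down the associated session. The con/session values are heap addresses, so grouping resets by connection pointer is only meaningful within one OSD process lifetime; a quick tally sketch over journal lines like the two above:

```python
import re
from collections import Counter

def count_resets(lines):
    """Tally ms_handle_reset events per connection pointer."""
    resets = Counter()
    for line in lines:
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            resets[m.group(1)] += 1
    return resets

sample = [
    "osd.0 140 ms_handle_reset con 0x55d25650a000 session 0x55d2563650e0",
    "osd.0 140 ms_handle_reset con 0x55d25650a400 session 0x55d257fc2960",
]
print(count_resets(sample))
# Counter({'0x55d25650a000': 1, '0x55d25650a400': 1})
```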
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0x5696aa/0x61a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 11010048 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961537 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.855613708s of 13.859876633s, submitted: 1
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 11001856 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a800 session 0x55d258498960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 10756096 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 10756096 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a000 session 0x55d2580730e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbfd7000/0x0/0x4ffc00000, data 0x7808d6/0x833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989469 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650a400 session 0x55d2586854a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 10723328 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 ms_handle_reset con 0x55d25650b400 session 0x55d257f6cb40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d258072960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10412032 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 10412032 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 10207232 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 9633792 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008265 data_alloc: 218103808 data_used: 2273280
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 8216576 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008265 data_alloc: 218103808 data_used: 2273280
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbfb1000/0x0/0x4ffc00000, data 0x7a68a8/0x85a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 8208384 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.823438644s of 18.955329895s, submitted: 37
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 3637248 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037537 data_alloc: 218103808 data_used: 3076096
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
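[annotation] RocksDB sealed the active memtable and opened a fresh one backed by WAL #43; "Immutable memtables: 0" means nothing else was already queued for flush. A toy model of the rotation trigger, assuming the standard write_buffer_size threshold (this is an illustration, not RocksDB's actual code path):

```python
class ToyMemtableWriter:
    """Toy model: once the active memtable holds write_buffer_size
    bytes it is sealed and a new memtable + WAL number are allocated."""
    def __init__(self, write_buffer_size: int):
        self.write_buffer_size = write_buffer_size
        self.active_bytes = 0
        self.wal_number = 42
        self.immutable = 0   # sealed memtables still awaiting flush

    def put(self, nbytes: int) -> None:
        self.active_bytes += nbytes
        if self.active_bytes >= self.write_buffer_size:
            self.wal_number += 1
            print(f"New memtable created with log file: "
                  f"#{self.wal_number}. Immutable memtables: "
                  f"{self.immutable}.")
            self.immutable += 1   # the sealed one now waits for flush
            self.active_bytes = 0

w = ToyMemtableWriter(write_buffer_size=64 * 2**20)
w.put(64 * 2**20)
# New memtable created with log file: #43. Immutable memtables: 0.
```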
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 1089536 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbd7f000/0x0/0x4ffc00000, data 0x9d38a8/0xa87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039589 data_alloc: 218103808 data_used: 3117056
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93478912 unmapped: 1777664 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93528064 unmapped: 1728512 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040501 data_alloc: 218103808 data_used: 3186688
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305800 session 0x55d258a48f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25893af00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93536256 unmapped: 1720320 heap: 95256576 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.227094650s of 23.355901718s, submitted: 39
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257def860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 7979008 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86d000/0x0/0x4ffc00000, data 0xd4b8a8/0xdff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93708288 unmapped: 7979008 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 7897088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 3186688
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d257dee5a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86d000/0x0/0x4ffc00000, data 0xd4b8a8/0xdff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93790208 unmapped: 7897088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d257dee780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305c00 session 0x55d2578a21e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25788c1e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069989 data_alloc: 218103808 data_used: 3186688
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 7888896 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 93814784 unmapped: 7872512 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 6242304 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084581 data_alloc: 218103808 data_used: 5349376
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 5898240 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5881856 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084581 data_alloc: 218103808 data_used: 5349376
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95838208 unmapped: 5849088 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95854592 unmapped: 5832704 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.023689270s of 21.075399399s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa86c000/0x0/0x4ffc00000, data 0xd4b8b8/0xe00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
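[annotation] Almost every heartbeat in this capture reports an empty op history, but the line above shows [0,0,0,1] — a histogram of recently observed ops per time bucket (the exact bucket semantics are osd_stat_t internals and are not asserted here). Extracting it is a one-liner:

```python
import re

def op_hist(line: str) -> list[int]:
    """Extract the trailing 'op hist [...]' histogram from a heartbeat
    osd_stat line; an empty list means no recent ops were recorded."""
    m = re.search(r"op hist \[([\d,]*)\]", line)
    body = m.group(1)
    return [int(x) for x in body.split(",")] if body else []

print(op_hist("... peers [1,2] op hist [0,0,0,1])"))  # [0, 0, 0, 1]
print(op_hist("... peers [1,2] op hist [])"))         # []
```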
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 4456448 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa346000/0x0/0x4ffc00000, data 0x12718b8/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97271808 unmapped: 4415488 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa2eb000/0x0/0x4ffc00000, data 0x12cb8b8/0x1380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135165 data_alloc: 218103808 data_used: 5574656
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2588d2d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 4374528 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 4489216 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133773 data_alloc: 218103808 data_used: 5566464
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 4251648 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa24c000/0x0/0x4ffc00000, data 0x136b8b8/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 4251648 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d25893a960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d2564cb2c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa24c000/0x0/0x4ffc00000, data 0x136b8b8/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 97443840 unmapped: 4243456 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6000 session 0x55d2588f0780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046014 data_alloc: 218103808 data_used: 3174400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.008966446s of 13.345650673s, submitted: 79
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd9000/0x0/0x4ffc00000, data 0x9df8a8/0xa93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96296960 unmapped: 5390336 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d25893a780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257390000 session 0x55d2570ed4a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047674 data_alloc: 218103808 data_used: 3170304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257d8ed20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 7143424 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 7143424 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 7536640 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982639 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982032 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.694840431s of 15.787263870s, submitted: 17
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981900 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d258956960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d25893a1e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d255f08b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 7520256 heap: 101687296 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2580623c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000614 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257fc3e00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257390000 session 0x55d257da3c20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d256364f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d257da25a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2584983c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d258970d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007440 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 9576448 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007440 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 9150464 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 9093120 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025224 data_alloc: 218103808 data_used: 2793472
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.343212128s of 23.383264542s, submitted: 8
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 9469952 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 9469952 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 9674752 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024988 data_alloc: 218103808 data_used: 2801664
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 8192000 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fadaf000/0x0/0x4ffc00000, data 0x8088b8/0x8bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96108544 unmapped: 7749632 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 5799936 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 5799936 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99106816 unmapped: 4751360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060953 data_alloc: 218103808 data_used: 2981888
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061105 data_alloc: 218103808 data_used: 2985984
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99115008 unmapped: 4743168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.644647598s of 15.819246292s, submitted: 49
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5783552 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 8910 writes, 33K keys, 8910 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 8910 writes, 2141 syncs, 4.16 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1180 writes, 2991 keys, 1180 commit groups, 1.0 writes per commit group, ingest: 2.88 MB, 0.00 MB/s
    Interval WAL: 1180 writes, 538 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5783552 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060973 data_alloc: 218103808 data_used: 2985984
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98082816 unmapped: 5775360 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060973 data_alloc: 218103808 data_used: 2985984
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa914000/0x0/0x4ffc00000, data 0xca38b8/0xd58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258017680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5767168 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2563645a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.520689011s of 13.525539398s, submitted: 1
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 5758976 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987063 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2563fcb40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987063 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 7684096 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987195 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96182272 unmapped: 7675904 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.768692970s of 12.803792000s, submitted: 10
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96198656 unmapped: 7659520 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96198656 unmapped: 7659520 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96206848 unmapped: 7651328 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988723 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96215040 unmapped: 7643136 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d2581c41e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96223232 unmapped: 7634944 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25807fe00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258970f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258305000 session 0x55d257da34a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588e2b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d254a8c780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003815 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2563f61e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6d000/0x0/0x4ffc00000, data 0x64a90a/0x6ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 7454720 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.318846703s of 12.507612228s, submitted: 44
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d258308000
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004804 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6d000/0x0/0x4ffc00000, data 0x64a90a/0x6ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 7700480 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 7700480 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010968 data_alloc: 218103808 data_used: 921600
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 7577600 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96288768 unmapped: 7569408 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.152463913s of 10.198073387s, submitted: 12
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011132 data_alloc: 218103808 data_used: 917504
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 7520256 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf6c000/0x0/0x4ffc00000, data 0x64a92d/0x700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 7520256 heap: 103858176 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faecd000/0x0/0x4ffc00000, data 0x6e992d/0x79f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1,1,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 6111232 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa74f000/0x0/0x4ffc00000, data 0xe6192d/0xf17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa713000/0x0/0x4ffc00000, data 0xe9b92d/0xf51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086070 data_alloc: 218103808 data_used: 1970176
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99016704 unmapped: 6512640 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa71b000/0x0/0x4ffc00000, data 0xe9b92d/0xf51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 6471680 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079950 data_alloc: 218103808 data_used: 1974272
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa718000/0x0/0x4ffc00000, data 0xe9e92d/0xf54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99065856 unmapped: 6463488 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa718000/0x0/0x4ffc00000, data 0xe9e92d/0xf54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079950 data_alloc: 218103808 data_used: 1974272
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.148156166s of 15.456823349s, submitted: 125
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa717000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa717000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 6447104 heap: 105529344 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9400 session 0x55d2563f6d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d257176b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2575272c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d258685860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 13008896 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d257177c20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9400 session 0x55d2583083c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d257da2f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257da25a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d257da3860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa041000/0x0/0x4ffc00000, data 0x157493d/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131730 data_alloc: 218103808 data_used: 1974272
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa041000/0x0/0x4ffc00000, data 0x157493d/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 13025280 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d257da2960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2f800 session 0x55d2563f8b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 13017088 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133517 data_alloc: 218103808 data_used: 1978368
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 13017088 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101785600 unmapped: 10043392 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181397 data_alloc: 218103808 data_used: 9056256
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 7831552 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 7798784 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 7798784 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.626377106s of 18.689212799s, submitted: 15
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 7766016 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181989 data_alloc: 218103808 data_used: 9060352
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 7766016 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 4923392 heap: 111828992 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1574960/0x162c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,10])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110370816 unmapped: 3637248 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110575616 unmapped: 3432448 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed960/0x1fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260019 data_alloc: 234881024 data_used: 10125312
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5152768 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed960/0x1fa5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258859 data_alloc: 234881024 data_used: 10125312
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2575c4d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.864350319s of 11.624783516s, submitted: 79
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 5120000 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257837 data_alloc: 234881024 data_used: 10121216
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c7000/0x0/0x4ffc00000, data 0x1eed950/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 9945088 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eed950/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,0,6])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 10371072 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d25893a000
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 10362880 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091164 data_alloc: 218103808 data_used: 1978368
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f950/0xf56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1.173839927s of 10.133413315s, submitted: 14
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 10354688 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2570ecd20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090609 data_alloc: 218103808 data_used: 1974272
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa716000/0x0/0x4ffc00000, data 0xe9f92d/0xf55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d258957e00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102162432 unmapped: 11845632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090245 data_alloc: 218103808 data_used: 1974272
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.528004646s of 10.041405678s, submitted: 43
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101138432 unmapped: 12869632 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8cb/0x624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d258309a40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 12795904 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010625 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010349 data_alloc: 218103808 data_used: 102400
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6800 session 0x55d254a8cb40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b4c00 session 0x55d258308d20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.167170525s of 10.927146912s, submitted: 18
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25560a400 session 0x55d2564dc780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564521e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 12779520 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 12713984 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009610 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564ca780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2581c45a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101449728 unmapped: 12558336 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.099966049s of 17.958248138s, submitted: 233
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 99876864 unmapped: 14131200 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022171 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d2581c4f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2581c52c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2558b3c00 session 0x55d2583090e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 13074432 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022187 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0x63790a/0x6ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255f09860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 13066240 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027416 data_alloc: 218103808 data_used: 724992
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.301165581s of 14.841221809s, submitted: 29
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027500 data_alloc: 218103808 data_used: 729088
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0x63792d/0x6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 13049856 heap: 114008064 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102293504 unmapped: 12763136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102293504 unmapped: 12763136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050564 data_alloc: 218103808 data_used: 946176
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 12673024 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabfd000/0x0/0x4ffc00000, data 0x9b992d/0xa6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057846 data_alloc: 218103808 data_used: 1101824
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 13787136 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101400576 unmapped: 13656064 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069133759s of 12.610257149s, submitted: 56
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25807e960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabdc000/0x0/0x4ffc00000, data 0x9da92d/0xa90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabdc000/0x0/0x4ffc00000, data 0x9da92d/0xa90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056818 data_alloc: 218103808 data_used: 1105920
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 13516800 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057218 data_alloc: 218103808 data_used: 1110016
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x9e092d/0xa96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd6000/0x0/0x4ffc00000, data 0x9e092d/0xa96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.912015915s of 10.996917725s, submitted: 5
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 13508608 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057610 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101556224 unmapped: 13500416 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 13492224 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9e392d/0xa99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101564416 unmapped: 13492224 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101572608 unmapped: 13484032 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fabd3000/0x0/0x4ffc00000, data 0x9e392d/0xa99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058178 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 13352960 heap: 115056640 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d2588cbc20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588ca5a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573a5400 session 0x55d2588cab40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2588cb4a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588cb680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588ca3c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d255f3ed20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258304400 session 0x55d255f3e3c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255f3f860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 19628032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 19628032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 19619840 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080050 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 19619840 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.358315468s of 13.568110466s, submitted: 6
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c4000/0x0/0x4ffc00000, data 0xcf193d/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257fc25a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2589572c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d257e17860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b9000 session 0x55d25793f680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2563f83c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 19611648 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082184 data_alloc: 218103808 data_used: 1122304
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c3000/0x0/0x4ffc00000, data 0xcf194c/0xda9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 18866176 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8c0000/0x0/0x4ffc00000, data 0xcf494c/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102916 data_alloc: 218103808 data_used: 4268032
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 18857984 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: mgrc ms_handle_reset ms_handle_reset con 0x55d25650b800
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3916108464
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3916108464,v1:192.168.122.100:6801/3916108464]
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: mgrc handle_mgr_configure stats_period=5
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.356122017s of 12.378782272s, submitted: 6
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0xcfa94c/0xdb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 18939904 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103484 data_alloc: 218103808 data_used: 4268032
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 18481152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 18481152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 17809408 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5be000/0x0/0x4ffc00000, data 0xff694c/0x10ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103546880 unmapped: 17809408 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125352 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5b7000/0x0/0x4ffc00000, data 0xffd94c/0x10b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103555072 unmapped: 17801216 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.289596558s of 11.366083145s, submitted: 17
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127274 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5b4000/0x0/0x4ffc00000, data 0x100094c/0x10b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127142 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 17752064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 17752064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127182 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5af000/0x0/0x4ffc00000, data 0x100594c/0x10bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.827489853s of 11.043312073s, submitted: 5
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 17735680 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5aa000/0x0/0x4ffc00000, data 0x100a94c/0x10c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127806 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127694 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.485318184s of 11.773607254s, submitted: 4
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a7000/0x0/0x4ffc00000, data 0x100d94c/0x10c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127782 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5a4000/0x0/0x4ffc00000, data 0x101094c/0x10c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127822 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59f000/0x0/0x4ffc00000, data 0x101594c/0x10cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 17719296 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127822 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.699817657s of 12.715748787s, submitted: 4
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59c000/0x0/0x4ffc00000, data 0x101894c/0x10d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa59c000/0x0/0x4ffc00000, data 0x101894c/0x10d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 17637376 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128430 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 17629184 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa597000/0x0/0x4ffc00000, data 0x101d94c/0x10d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128334 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.147480965s of 11.409416199s, submitted: 4
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa597000/0x0/0x4ffc00000, data 0x101d94c/0x10d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128422 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa594000/0x0/0x4ffc00000, data 0x102094c/0x10d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa592000/0x0/0x4ffc00000, data 0x102294c/0x10da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 17588224 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128446 data_alloc: 218103808 data_used: 4333568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d257fc3680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25608c3c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa591000/0x0/0x4ffc00000, data 0x102394c/0x10db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103776256 unmapped: 17580032 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.306691170s of 12.825207710s, submitted: 5
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2588e3c20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065426 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab8f000/0x0/0x4ffc00000, data 0xa2693c/0xadd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2578a2960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064381 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f0f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.254345894s of 10.118084908s, submitted: 12
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102383616 unmapped: 18972672 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064309 data_alloc: 218103808 data_used: 1118208
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fab90000/0x0/0x4ffc00000, data 0xa2692d/0xadc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100810752 unmapped: 20545536 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257177860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019599 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 20529152 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019599 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.282787323s of 14.191265106s, submitted: 20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 20611072 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019467 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 20611072 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d258073a40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588f1e00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2573c0000 session 0x55d2588cad20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d258308b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.678023338s of 13.685975075s, submitted: 1
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2588d30e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d25893a5a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d25807e5a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d25793fe00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2571761e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066720 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d257176f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d257176b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257177680
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e2b40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100777984 unmapped: 20578304 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079032 data_alloc: 218103808 data_used: 1806336
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102440960 unmapped: 18915328 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e3a40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faaca000/0x0/0x4ffc00000, data 0xaec91a/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 18882560 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.982872009s of 10.151477814s, submitted: 46
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2584981e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024870 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024870 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103047168 unmapped: 18309120 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d258309c20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fae39000/0x0/0x4ffc00000, data 0x77f8a8/0x833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039122 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 20471808 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.847422600s of 15.979025841s, submitted: 2
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588e25a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faa28000/0x0/0x4ffc00000, data 0x77f8cb/0x834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 20455424 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040927 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100917248 unmapped: 20439040 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d254adda40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4faa28000/0x0/0x4ffc00000, data 0x77f8cb/0x834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d258062780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 21094400 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026795 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.972145081s of 32.033672333s, submitted: 16
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d255f090e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 20832256 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057517 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 20824064 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 20619264 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083205 data_alloc: 218103808 data_used: 3915776
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 20176896 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101187584 unmapped: 20168704 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083205 data_alloc: 218103808 data_used: 3915776
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 20160512 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa87b000/0x0/0x4ffc00000, data 0x92d8a8/0x9e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.084077835s of 18.120235443s, submitted: 5
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 15065088 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 106307584 unmapped: 15048704 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124313 data_alloc: 218103808 data_used: 4767744
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129111 data_alloc: 218103808 data_used: 4808704
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130039 data_alloc: 218103808 data_used: 4833280
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d255ee4960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd248a8/0xdd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 15728640 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.602562904s of 15.801031113s, submitted: 58
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588cba40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 17793024 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030615 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 17784832 heap: 121356288 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.333866119s of 22.364822388s, submitted: 7
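The bluestore _kv_sync_thread utilization lines report how long the RocksDB sync thread sat idle within the reporting window and how many transactions it committed; here it was busy well under one percent of the time. A one-shot calculation from the line above:

```python
# idle seconds, window seconds, and transaction count from the line above.
idle, window, submitted = 22.333866119, 22.364822388, 7

busy = 1 - idle / window
print(f"busy {busy:.2%} of a {window:.1f}s window, "
      f"{submitted / window:.2f} txns/s submitted")
```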
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f1860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080521 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 23085056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa620000/0x0/0x4ffc00000, data 0xb888a8/0xc3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
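Most heartbeats in this stretch carry an empty op hist [], but the line above reports [0,0,0,0,0,0,1]. This looks like osd_stat_t's op_queue_age_hist, a power-of-two histogram in which bucket i counts ops whose queue age fell in [2^i, 2^(i+1)); the unit (milliseconds below) is an assumption, since the log does not state it:

```python
# Expand a power-of-two histogram such as "op hist [0,0,0,0,0,0,1]".
hist = [0, 0, 0, 0, 0, 0, 1]

for i, count in enumerate(hist):
    if count:
        print(f"{count} op(s) aged roughly [{2**i}, {2**(i + 1)}) ms")
# -> 1 op(s) aged roughly [64, 128) ms
```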
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588f1e00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 22781952 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 21233664 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128839 data_alloc: 218103808 data_used: 6434816
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 21037056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105668608 unmapped: 21012480 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129143 data_alloc: 218103808 data_used: 6492160
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa5fb000/0x0/0x4ffc00000, data 0xbac8cb/0xc61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 20979712 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.974012375s of 18.638038635s, submitted: 19
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 18014208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162431 data_alloc: 218103808 data_used: 6524928
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 15548416 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e58000/0x0/0x4ffc00000, data 0x13308cb/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257da2f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 14794752 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203685 data_alloc: 218103808 data_used: 6742016
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 14786560 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 14786560 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 14778368 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9ced000/0x0/0x4ffc00000, data 0x14ba8cb/0x156f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 14778368 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2575c63c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112205824 unmapped: 14475264 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208050 data_alloc: 218103808 data_used: 6742016
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.759752274s of 11.006592751s, submitted: 72
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 14737408 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 14737408 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 14180352 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219126 data_alloc: 218103808 data_used: 8359936
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 14016512 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x14de8cb/0x1593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 14008320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219126 data_alloc: 218103808 data_used: 8359936
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 14008320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.875556946s of 10.879505157s, submitted: 1
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 14327808 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b9a000/0x0/0x4ffc00000, data 0x160d8cb/0x16c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 14254080 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248186 data_alloc: 234881024 data_used: 9187328
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 14090240 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0x16958cb/0x174a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248186 data_alloc: 234881024 data_used: 9187328
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 14057472 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 14041088 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.649332047s of 10.708267212s, submitted: 15
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 14548992 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b10000/0x0/0x4ffc00000, data 0x16968cb/0x174b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248226 data_alloc: 234881024 data_used: 9256960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9b10000/0x0/0x4ffc00000, data 0x16968cb/0x174b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 14540800 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08400 session 0x55d25899e000
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08000 session 0x55d2578a3860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 16490496 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d257fc3860
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196350 data_alloc: 218103808 data_used: 6742016
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e76000/0x0/0x4ffc00000, data 0x13318cb/0x13e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f9e76000/0x0/0x4ffc00000, data 0x13318cb/0x13e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2581c4f00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.367655754s of 12.646665573s, submitted: 21
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2588ca960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 16482304 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196218 data_alloc: 218103808 data_used: 6742016
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 21921792 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2588e2960
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046418 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 21913600 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.658111572s of 27.196340561s, submitted: 36
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2563f74a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa915000/0x0/0x4ffc00000, data 0x8938a8/0x947000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071844 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2589b8c00 session 0x55d2581c52c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d255197a40
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5c00 session 0x55d2564dcd20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 21749760 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a000 session 0x55d2588d2780
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 21741568 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075127 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096863 data_alloc: 218103808 data_used: 3309568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa913000/0x0/0x4ffc00000, data 0x8938db/0x949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096863 data_alloc: 218103808 data_used: 3309568
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.147645950s of 18.915796280s, submitted: 11
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 21700608 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 16400384 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109608960 unmapped: 17072128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa405000/0x0/0x4ffc00000, data 0xda18db/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109912064 unmapped: 16769024 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0xda78db/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145195 data_alloc: 218103808 data_used: 3702784
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109690880 unmapped: 16990208 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145211 data_alloc: 218103808 data_used: 3702784
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145211 data_alloc: 218103808 data_used: 3702784
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 16973824 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d258e08400 session 0x55d257e165a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d257e2ec00 session 0x55d2570ee3c0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 16982016 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.207839966s of 16.376241684s, submitted: 58
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fa3e7000/0x0/0x4ffc00000, data 0xdb78db/0xe6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2555b5800 session 0x55d2564de1e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 19120128 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 3002 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2014 writes, 6494 keys, 2014 commit groups, 1.0 writes per commit group, ingest: 7.68 MB, 0.01 MB/s
    Interval WAL: 2014 writes, 861 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
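The derived figures in this dump are simple ratios over the reporting window, so they can be cross-checked from the raw counts it prints. With the interval numbers above (600 s window):

    # Cross-check of the derived fields in the DB Stats dump (values copied from it).
    interval_secs       = 600.0
    interval_wal_writes = 2014
    interval_wal_syncs  = 861
    interval_ingest_mb  = 7.68

    print(f"writes per sync: {interval_wal_writes / interval_wal_syncs:.2f}")  # 2.34, as logged
    print(f"ingest rate:     {interval_ingest_mb / interval_secs:.2f} MB/s")   # 0.01 MB/s, as logged

The cumulative "3.64 writes per sync" over 3002 syncs likewise implies roughly 10.9K WAL writes, which the dump rounds to "10K".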
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 19111936 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 19103744 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 19095552 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 19087360 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 19079168 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 19128320 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107708416 unmapped: 18972672 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 18989056 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 19013632 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'log dump' '{prefix=log dump}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30056448 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf dump' '{prefix=perf dump}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf schema' '{prefix=perf schema}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
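This burst of do_command entries is the OSD answering admin-socket queries (config diff/show, counter dump/schema, log dump, perf dump/histogram/schema), most likely from a metrics collector polling the daemon. The same data can be requested by hand; a sketch using the stock ceph CLI via subprocess, assuming the CLI and the osd.0 admin socket are reachable from wherever this runs (on a containerized deployment the command may need to run inside the OSD container):

    import json
    import subprocess

    def admin_socket(cmd: str, daemon: str = "osd.0") -> dict:
        """Run `ceph daemon <daemon> <cmd>` and parse the JSON reply."""
        out = subprocess.check_output(["ceph", "daemon", daemon, *cmd.split()])
        return json.loads(out)

    # The same commands that appear in the do_command lines above.
    perf = admin_socket("perf dump")
    cfg  = admin_socket("config show")
    print(sorted(perf)[:5])            # a few of the perf counter sections
    print(cfg.get("osd_memory_target"))  # expected "4294967296" here, per the tune_memory target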
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 30359552 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107372544 unmapped: 30351360 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 30343168 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 30334976 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107397120 unmapped: 30326784 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 30318592 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 30310400 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 30302208 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 30294016 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 30294016 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 30294016 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 30294016 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053960 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 30294016 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 174.793258667s of 175.275375366s, submitted: 19
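The _kv_sync_thread line above is the clearest load signal in this stretch: idle for 174.79 s of a 175.28 s window with only 19 submissions, i.e. the kv sync thread was ~99.7% idle, consistent with a nearly unloaded OSD:

    idle, window, submitted = 174.793258667, 175.275375366, 19
    print(f"kv_sync idle: {idle / window:.1%}")        # 99.7%
    print(f"submits/sec:  {submitted / window:.3f}")   # ~0.108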
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 30285824 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac38000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,1])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30056448 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 29908992 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 29900800 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 29892608 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 29884416 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 29876224 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 29868032 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 29859840 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107872256 unmapped: 29851648 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 29843456 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27764 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107888640 unmapped: 29835264 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107896832 unmapped: 29827072 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 29818880 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fac39000/0x0/0x4ffc00000, data 0x56f8a8/0x623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 29810688 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053668 data_alloc: 218103808 data_used: 106496
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 29794304 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 29786112 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 29777920 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 29769728 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 29761536 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 29753344 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107978752 unmapped: 29745152 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 29736960 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 29728768 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650a400 session 0x55d2589574a0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d2570d6800 session 0x55d2589561e0
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650ac00 session 0x55d2564dcf00
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: osd.0 143 ms_handle_reset con 0x55d25650b400 session 0x55d257f6dc20
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 29720576 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}'
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
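These do_command pairs show the OSD's admin socket handling requests such as config show and counter dump, the same interface that `ceph daemon osd.0 <command>` talks to. A minimal client sketch; the socket path is environment-specific, and the wire framing (NUL-terminated JSON request, 4-byte big-endian length prefix on the JSON reply) is assumed from typical Ceph admin-socket clients rather than taken from this log:

```python
# Minimal admin-socket client sketch; path and framing are assumptions.
import json
import socket
import struct

ASOK = "/var/run/ceph/ceph-osd.0.asok"  # hypothetical path for osd.0

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(ASOK)
    sock.sendall(json.dumps({"prefix": "config show"}).encode() + b"\0")
    (length,) = struct.unpack(">I", sock.recv(4))       # reply length
    body = b""
    while len(body) < length:
        body += sock.recv(length - len(body))
    print(json.loads(body))
```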
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 29679616 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: prioritycache tune_memory target: 4294967296 mapped: 108371968 unmapped: 29351936 heap: 137723904 old mem: 2845415832 new mem: 2845415832
Oct 12 17:43:28 np0005481680 ceph-osd[81892]: do_command 'log dump' '{prefix=log dump}'
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
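The pgmap capacity in this line agrees with the per-OSD store_statfs totals earlier: three OSDs of roughly 20 GiB each. Worked out, assuming 0x4ffc00000 is the per-OSD total byte count:

```python
# Three OSDs of the size osd.0 reports account for the pgmap totals.
GiB = 2**30
per_osd_total = 0x4ffc00000                   # total bytes from store_statfs
print(f"{3 * per_osd_total / GiB:.0f} GiB")   # -> 60 GiB, matching "60 GiB / 60 GiB avail"
```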
Oct 12 17:43:28 np0005481680 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27307 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18201 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:28 np0005481680 podman[304920]: 2025-10-12 21:43:28.752587367 +0000 UTC m=+0.091078208 container health_status af7171e99b20af3d9a2099327aa9f7fa909316286490240c907554fc36fef950 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 12 17:43:28 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27782 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:28 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:28.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18210 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27325 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 12 17:43:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779828192' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 12 17:43:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
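radosgw's beast frontend writes an Apache-style access line: client, user, timestamp, request, HTTP status, byte count, and a trailing latency. A small parser sketch; the pattern is read off this one sample, so treat it as illustrative rather than a specification of the format:

```python
# Parse the beast access line; field layout is inferred from this sample.
import re

line = ('192.168.122.102 - anonymous [12/Oct/2025:21:43:29.155 +0000] '
        '"HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s')

m = re.search(r'^(\S+) - (\S+) \[(.+?)\] "(.+?)" (\d+) (\d+).* latency=([\d.]+)s$', line)
client, user, ts, request, status, nbytes, latency = m.groups()
print(client, repr(request), status, f"{float(latency) * 1e3:.3f} ms")
# -> 192.168.122.102 'HEAD / HTTP/1.0' 200 0.000 ms
```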
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27800 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:29 np0005481680 nova_compute[264665]: 2025-10-12 21:43:29.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27343 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:29 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 12 17:43:29 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608016268' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18231 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:29 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:29 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:29 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:29 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27827 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:30 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18255 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:30 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:30 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18273 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:30 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 12 17:43:30 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183917476' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 12 17:43:31 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18288 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097048141' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 12 17:43:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988416639' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1353013950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440026184' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 12 17:43:31 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004989776' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 12 17:43:31 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:31 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:31 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: [prometheus INFO cherrypy.access.139877673511616] ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:32] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:43:32 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-mgr-compute-0-fmjeht[73897]: ::ffff:192.168.122.100 - - [12/Oct/2025:21:43:32] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27932 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553368325' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104591709' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:32 np0005481680 nova_compute[264665]: 2025-10-12 21:43:32.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27956 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27950 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27451 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007970412' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 12 17:43:32 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826030307' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27974 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426341008' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 12 17:43:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.001000025s ======
Oct 12 17:43:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:33.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506432746' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 systemd[1]: Starting Hostname Service...
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/1747286179' entity='mgr.compute-0.fmjeht' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27986 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 systemd[1]: Started Hostname Service.
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036280089' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313804930' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.28004 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 12 17:43:33 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687021227' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 12 17:43:33 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:33 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:33 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:33.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27508 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 12 17:43:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754941995' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.28019 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18417 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27523 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18423 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 nova_compute[264665]: 2025-10-12 21:43:34.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.28040 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18438 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18432 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27538 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:34 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.28052 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:35.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 12 17:43:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18453 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27562 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:43:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 12 17:43:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18471 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:35 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:35 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 12 17:43:35 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517684175' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 12 17:43:35 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:35 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.002000049s ======
Oct 12 17:43:35 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:35.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27598 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18492 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27613 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 12 17:43:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927591597' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18507 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.27628 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 12 17:43:36 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113166252' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 12 17:43:36 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 12 17:43:37 np0005481680 podman[306083]: 2025-10-12 21:43:37.099612396 +0000 UTC m=+0.063015475 container health_status 930df8e53033f78a255484c5cae7a08f4d79bbf125ba11b17027bae21ef7b15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 12 17:43:37 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 12 17:43:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766854540' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 12 17:43:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.102 - anonymous [12/Oct/2025:21:43:37.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 12 17:43:37 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 12 17:43:37 np0005481680 ceph-5adb8c35-1b74-5730-a252-62321f654cd5-alertmanager-compute-0[103555]: ts=2025-10-12T21:43:37.351Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Oct 12 17:43:37 np0005481680 nova_compute[264665]: 2025-10-12 21:43:37.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 12 17:43:37 np0005481680 radosgw[95273]: ====== starting new request req=0x7f509b0e75d0 =====
Oct 12 17:43:37 np0005481680 radosgw[95273]: ====== req done req=0x7f509b0e75d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 12 17:43:37 np0005481680 radosgw[95273]: beast: 0x7f509b0e75d0: 192.168.122.100 - anonymous [12/Oct/2025:21:43:37.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 12 17:43:38 np0005481680 ceph-mon[73608]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 12 17:43:38 np0005481680 ceph-mon[73608]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854193228' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 12 17:43:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.28154 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 12 17:43:38 np0005481680 ceph-mgr[73901]: log_channel(cluster) log [DBG] : pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail
Oct 12 17:43:38 np0005481680 ceph-mgr[73901]: log_channel(audit) log [DBG] : from='client.18570 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch